From self-driving cars to unmanned aerial vehicles, autonomous robots are taking on an increasingly prominent role in everyday life, and researcher Ryan Williams has his eye on what multi-robot systems will contribute. He believes a better understanding of autonomous coordination in those systems will be critical to solving real-world problems in the digital age.

Williams, assistant professor in the Bradley Department of Electrical and Computer Engineering, aims to advance multi-robot system theory to allow robots to plan their interactions intelligently, gracefully enter and exit systems, and participate in trustful decision-making processes with other robots and human teammates. His research is grounded in the idea that autonomous coordination is ultimately driven by how robots interact with each other.

Over the next five years, Williams will explore these concepts of multi-robot theory with support from a National Science Foundation Faculty Early Career Development (CAREER) award. As described by the NSF, the award is given to early-career researchers who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.

His team will validate their theoretical thrusts in the context of search and rescue, in which humans and aerial robots collaborate to find a lost person. “In such a setting, with highly trained human searchers in the loop, autonomy that can intelligently share search data in a manner that builds human trust is key to team success,” Williams said. “As we readily observe in human teams, collaboration without trust is often ineffective or even counterproductive.”

The current state of multi-robot interactions can be described as “fragile,” Williams said. This fragility shows up in the gap between the robotics community’s theoretical treatment of such systems and their deployment in the field. Theoretical work on multi-robot interaction, he explained, often makes broad assumptions to simplify analysis, but those assumptions miss the mark in real-world field deployments.

Williams’s work will tackle three realities of multi-robot field deployments: robots must be able to disconnect and reconnect with each other intelligently; robots may enter and exit repeatedly during long missions, for example because of battery depletion or failure; and when robots interact with humans, they must do so in a way that facilitates trust between humans and systems. “The key difficulty one faces when solving these issues is to maintain acceptable multi-robot collaboration under such complicated conditions of interaction,” Williams said.
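
One way to picture the first two of these realities, intermittent connectivity and robots dropping in and out of a mission, is as a communication graph that must stay connected while its membership changes. The short Python sketch below is purely illustrative and is not drawn from Williams’s algorithms; the robot names and radio links are hypothetical.

```python
# Minimal illustration (not from Williams's project): checking whether a
# multi-robot team's communication graph stays connected as robots drop
# out (e.g., battery depletion) and later rejoin. IDs and links are hypothetical.
from collections import deque

def is_connected(active, links):
    """Return True if every active robot can reach every other over the links."""
    if not active:
        return True
    start = next(iter(active))
    seen, frontier = {start}, deque([start])
    while frontier:
        robot = frontier.popleft()
        for a, b in links:
            # Only traverse links whose endpoints are both still active.
            if a in active and b in active:
                neighbor = b if robot == a else a if robot == b else None
                if neighbor is not None and neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append(neighbor)
    return seen == active

# A team of four aerial robots with a sparse chain of radio links.
links = [("uav1", "uav2"), ("uav2", "uav3"), ("uav3", "uav4")]
active = {"uav1", "uav2", "uav3", "uav4"}
print(is_connected(active, links))   # True: the chain reaches everyone

active.discard("uav2")               # uav2 exits to recharge
print(is_connected(active, links))   # False: uav1 is now cut off
```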

Williams’s team will run studies on concepts such as human searchers interacting with intelligent drones to decide on effective search paths through complex environments, and human search coordinators collaborating with an AI-powered decision-making agent to allocate human and robot resources across a large-scale search, deciding where to search and who searches.
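
The “where to search and who searches” question is, at heart, an allocation problem. As a rough illustration only, and not the decision-making agent Williams’s team is building, the sketch below greedily assigns each searcher, human or drone, to the unclaimed sector where its chance of finding the lost person is highest; the searcher names, sector names, and probabilities are invented.

```python
# Hypothetical greedy allocation of searchers to search sectors.
def allocate(searchers, sectors, detect_prob):
    """detect_prob[(searcher, sector)] = chance that this searcher finds
    the lost person if the person is in that sector."""
    assignment = {}
    free_sectors = set(sectors)
    for searcher in searchers:
        if not free_sectors:
            break
        # Pick the remaining sector where this searcher is most effective.
        best = max(free_sectors, key=lambda s: detect_prob[(searcher, s)])
        assignment[searcher] = best
        free_sectors.remove(best)
    return assignment

searchers = ["ground_team_a", "uav1", "uav2"]
sectors = ["ridge", "creek", "meadow"]
detect_prob = {
    ("ground_team_a", "ridge"): 0.7, ("ground_team_a", "creek"): 0.5,
    ("ground_team_a", "meadow"): 0.4,
    ("uav1", "ridge"): 0.3, ("uav1", "creek"): 0.6, ("uav1", "meadow"): 0.8,
    ("uav2", "ridge"): 0.2, ("uav2", "creek"): 0.6, ("uav2", "meadow"): 0.7,
}
print(allocate(searchers, sectors, detect_prob))
# {'ground_team_a': 'ridge', 'uav1': 'meadow', 'uav2': 'creek'}
```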

To study and implement multi-robot collaboration, Williams’s team will develop new theoretical tools in combinatorial optimization, planning, control, and deep reinforcement learning. These tools will allow for meaningful analysis of multi-robot collaboration under complex conditions of interaction, as well as new algorithms for deploying multi-robot systems.

“In terms of the search and rescue application, these theoretical tools will allow for the first time a ‘team’ of autonomous drones to work with human searchers in a manner that is robust to the realities of a wilderness deployment, while also building trust with their human teammates,” Williams said.

As part of the project, Williams and his team will build a portable multi-robot testbed to set up search and rescue experiments with multi-robot teams and study their newly developed algorithms. This multi-scale, indoor-outdoor testbed will comprise unmanned aerial vehicles, unmanned ground vehicles, wireless Internet-of-Things sensors, a field computational cluster, multi-band wireless connectivity, and localization systems. It will also use on-campus facilities such as the Beamer-Lawson Indoor Football Practice Facility and the outdoor Drone Park.

Williams plans to bring in search and rescue practitioners and study how they use the team’s algorithms during mock lost-person searches supported by aerial vehicles.

“It will be crucial to run multi-scale experiments with humans to gauge the effectiveness of our trust-building algorithms as experimental complexity increases,” Williams said of the need for the testbed’s unique, multi-scale design. “A multi-scale testbed allows for smoother transitions of our developed technology from small indoor prototypes to large outdoor mock searches. This leap from prototypes to real-world deployments is often tough, and a single testbed that scales as necessary can help with realizing large-scale experimentation in realistic environments.”