Engineers at the California Institute of Technology (Caltech) have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces so that they do not run into one another.
Multi-robot motion coordination is regarded as a fundamental robotics problem, with applications ranging from urban search and rescue to the control of fleets of self-driving cars to formation flying in cluttered environments.
According to Caltech, two key challenges make multi-robot coordination difficult: first, robots moving through new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, as the number of robots in an environment grows, their interactions become increasingly complex and collisions more likely.
To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière, postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed two systems: Global-to-Local Safe Autonomy Synthesis (GLAS), a multi-robot motion-planning algorithm that imitates a complete-information planner using only local information, and Neural-Swarm, a swarm-tracking controller augmented to learn the complex aerodynamic interactions of close-proximity flight.
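The global-to-local idea can be illustrated with a small, hypothetical Python sketch (not the authors' implementation): trajectories from a complete-information planner serve as supervision, but the policy being trained is only ever shown a fixed-size local view of a robot's neighbours. The names SENSING_RADIUS, MAX_NEIGHBORS, local_observation, and behaviour_cloning_step are illustrative assumptions, not terms from the published work.

```python
# Hypothetical sketch of the "global-to-local" idea: train a small network by
# behaviour cloning to reproduce a complete-information planner's actions,
# while only feeding it what one robot could sense locally.
import numpy as np
import torch
import torch.nn as nn

SENSING_RADIUS = 1.0   # assumed local observation range
MAX_NEIGHBORS = 4      # assumed fixed-size local observation

def local_observation(positions, goal, i):
    """Build robot i's local view: relative goal plus nearest neighbours."""
    rel = positions - positions[i]
    dists = np.linalg.norm(rel, axis=1)
    order = np.argsort(dists)[1:]                                  # skip self
    neigh = [rel[j] for j in order if dists[j] < SENSING_RADIUS][:MAX_NEIGHBORS]
    while len(neigh) < MAX_NEIGHBORS:                              # pad to fixed size
        neigh.append(np.zeros(2))
    return np.concatenate([goal - positions[i], np.concatenate(neigh)])

# Small policy network: local observation in, velocity command out.
policy = nn.Sequential(nn.Linear(2 + 2 * MAX_NEIGHBORS, 64),
                       nn.ReLU(),
                       nn.Linear(64, 2))

def behaviour_cloning_step(optimizer, observations, expert_actions):
    """One supervised step: match the global planner's action from local data."""
    obs = torch.as_tensor(np.stack(observations), dtype=torch.float32)
    act = torch.as_tensor(np.stack(expert_actions), dtype=torch.float32)
    loss = nn.functional.mse_loss(policy(obs), act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal of this kind of imitation setup is that the expensive, complete-information planner is only needed offline to generate training data; online, each robot runs only the lightweight local policy.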
“Our work shows some promising results to overcome the safety, robustness, and scalability issues of conventional black-box artificial intelligence approaches for swarm motion planning with GLAS and close-proximity control for multiple drones using Neural-Swarm,” said Chung.
When GLAS and Neural-Swarm are used, a robot does not require a complete picture of the environment that it is moving through, or of the path its fellow robots intend to take, according to Caltech. Instead, robots learn how to navigate through a space on the fly, incorporating new information into a ‘learned model’ for movement as they go. Because each robot in a swarm requires only information about its local surroundings, computation can be decentralised, which makes it easier to scale up the size of the swarm.
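Continuing the hypothetical sketch above, decentralised execution might look like the following: at run time, each robot evaluates the same learned policy on nothing but its own local observation, so no central computer ever needs the full swarm state. The step_swarm helper and the simple velocity-integration model are again illustrative assumptions, reusing local_observation and policy from the previous sketch.

```python
def step_swarm(positions, goals, dt=0.1):
    """Advance every robot one step using only its own local observation."""
    commands = []
    for i in range(len(positions)):
        obs = torch.as_tensor(local_observation(positions, goals[i], i),
                              dtype=torch.float32)
        with torch.no_grad():
            commands.append(policy(obs).numpy())     # per-robot, no shared state
    return positions + dt * np.stack(commands)       # integrate velocity commands
```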
“These projects demonstrate the potential of integrating modern machine-learning methods into multi-agent planning and control, and also reveal exciting new directions for machine-learning research,” said Yue.
To test their new systems, Chung’s and Yue’s teams implemented GLAS and Neural-Swarm on quadcopter swarms of up to 16 drones and flew them in the open-air drone arena at Caltech’s Center for Autonomous Systems and Technologies. The teams said they found that GLAS could outperform the current state-of-the-art multi-robot motion-planning algorithm by 20% in a wide range of cases.