Researchers from MIT’s Improbable Artificial Intelligence Lab, which is part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a four-legged robot that can dribble a football ‘under the same conditions as humans’.
According to the research team, the bot uses a combination of onboard sensing and computing to traverse a variety of terrains such as sand, gravel, mud and snow, while simultaneously adapting to each environment’s varying impact on the movement of the ball.
The machine, nicknamed ‘DribbleBot’, is also able to get back up after falling.
One of the chief goals of the project was to build a robot that could learn how to actuate its legs during dribbling, unlocking hard-to-programme skills that can respond to diverse terrains. To do so, the team turned to simulation.
Initially, the robot did not know how to dribble the ball and had to learn through reinforcement learning. This means the bot received a reward when it performed the desired action and negative reinforcement when it made an error.
Such a process enabled it to learn the sequence of forces it needed to apply with its legs.
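To make that idea concrete, the sketch below is a toy, one-dimensional illustration of reward-driven trial and error, not the lab’s actual training code: the environment, its dynamics and the simple random-search update are all invented for illustration, whereas DribbleBot was trained in simulation with a far more sophisticated policy and learning algorithm.

```python
import numpy as np

class ToyDribbleEnv:
    """Made-up 1-D stand-in for the dribbling task: push a ball without kicking it away."""
    def reset(self):
        self.ball_velocity, self.steps = 0.0, 0
        return np.array([self.ball_velocity])

    def step(self, force):
        # Crude dynamics: the applied force accelerates the ball, rolling friction slows it.
        self.ball_velocity = 0.9 * self.ball_velocity + 0.1 * float(force)
        self.steps += 1
        # Positive reward for controlled forward progress, negative reinforcement for losing the ball.
        reward = self.ball_velocity if self.ball_velocity < 2.0 else -1.0
        return np.array([self.ball_velocity]), reward, self.steps >= 50

def run_episode(env, weights):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        force = weights[0] * obs[0] + weights[1]   # tiny linear 'policy': observation -> leg force
        obs, reward, done = env.step(force)
        total += reward
    return total

# Reward-driven trial and error: perturb the policy and keep changes that score higher.
env, weights, best, rng = ToyDribbleEnv(), np.zeros(2), -np.inf, np.random.default_rng(0)
for _ in range(300):
    candidate = weights + 0.1 * rng.standard_normal(2)
    if (score := run_episode(env, candidate)) > best:
        weights, best = candidate, score
print(f"best episode return: {best:.1f}")
```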
“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behaviour,” said Gabe Margolis, PhD student at MIT and co-leader of the research, alongside Yandong Ji, research assistant in the Improbable AI Lab.
“Once we’ve designed that reward, then it’s practice time for the robot: In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
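As one illustration of what such a reward might look like, the following Python sketch rewards the ball for tracking a commanded velocity and penalises falls. The function name, the weighting terms and the exponential shaping are assumptions made here for illustration, not the reward actually used in the research.

```python
import numpy as np

def dribbling_reward(ball_velocity, commanded_velocity, robot_fell,
                     velocity_weight=1.0, fall_penalty=10.0):
    """Hypothetical reward-shaping sketch: reward velocity tracking, penalise failure."""
    tracking_error = np.linalg.norm(np.asarray(ball_velocity) - np.asarray(commanded_velocity))
    reward = velocity_weight * np.exp(-tracking_error)  # near 1 when the ball matches the command
    if robot_fell:
        reward -= fall_penalty                          # strong negative reinforcement for falling
    return reward

# Example: ball moving close to the commanded velocity while the robot stays upright.
print(dribbling_reward(ball_velocity=[0.9, 0.0], commanded_velocity=[1.0, 0.0], robot_fell=False))
```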
The team built a recovery controller into the bot’s system to help it recover from falls and handle out-of-distribution disruptions and terrains.
Compared to walking alone, dribbling a football imposes additional constraints on DribbleBot’s motion and on the range of terrains it can traverse, because the robot needs to adjust its locomotion to apply the right forces to the ball.
What’s more, the ball may interact with a given surface, such as thick grass or pavement, differently from the way the robot does.
Despite the successful demonstration of the robot, there is still progress to be made before its capabilities match those of a human, the team has said.
Currently, the controller is not trained in simulated environments that include slopes or stairs, and the robot cannot perceive the geometry of the terrain; it only estimates material contact properties, such as friction. A single small set of stairs is therefore enough to leave the robot stuck, unable to lift the ball over the step.
The researchers hope to address this in future work and to explore how lessons learned during the development of DribbleBot could be applied to other tasks that combine locomotion and object manipulation, such as quickly transporting objects from place to place using legs or arms.