DribbleBot From MIT Designed to Play Soccer on Varied Terrains

The new system enables a quadruped robot to dribble a soccer ball on landscapes such as sand, gravel, mud, and snow, using reinforcement learning to adapt to varying ball dynamics

DribbleBot, a quadruped system, was trained in simulation to negotiate varied terrains and dribble a soccer ball as a step toward robots that could help in disaster response. Source: MIT

If you were ever to play soccer against a robot, the sight of an opponent dribbling toward you with determination would feel familiar, say MIT researchers, who have developed a new quadruped system.

While a human-robot soccer match is not a common occurrence, and the robot is far from a well-matched adversary for Lionel Messi, its dribbling is impressive in the wild, noted the Massachusetts Institute of Technology. Researchers at MIT's Improbable Artificial Intelligence Lab, part of its Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a legged robot that can dribble a soccer ball under the same conditions as humans.

The robot used a mixture of onboard sensing and computing to traverse different natural terrains such as sand, gravel, mud, and snow. It could also adapt to their varied impact on the ball’s motion. Like any committed athlete, “DribbleBot” could get up and recover the ball after falling.

MIT team turns to simulation to train robot

Programming robots to play soccer has been an active research area for some time. However, MIT's team wanted to automatically learn how to actuate the legs during dribbling. Their intent was to enable the discovery of hard-to-script skills for responding to diverse terrain. Enter simulation.

In the simulation, a robot, a ball, and terrain exist inside a digital twin of the natural world. The user loads in the bot and other assets and sets the physics parameters; the simulator then handles the forward simulation of the dynamics from there.

In true Rick and Morty fashion, 4,000 versions of the robot are simulated in parallel in real time, enabling data collection 4,000 times faster than using just one robot. That was a lot of data, said MIT.
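The speedup from simulating thousands of robots at once can be pictured with a toy vectorized environment. The dynamics, reward, and API below are illustrative assumptions for the sketch, not the lab's actual simulator:

```python
import numpy as np

class VectorizedDribbleEnv:
    """Toy stand-in for a parallel simulator: steps N robot
    instances at once using batched NumPy arithmetic."""

    def __init__(self, num_envs=4000, seed=0):
        self.num_envs = num_envs
        self.rng = np.random.default_rng(seed)
        # State per environment: ball velocity (x, y)
        self.ball_vel = np.zeros((num_envs, 2))

    def step(self, actions):
        # actions: (num_envs, 2) kick forces; all envs advance together
        friction = self.rng.uniform(0.8, 1.0, size=(self.num_envs, 1))
        self.ball_vel = friction * (self.ball_vel + 0.1 * actions)
        # Hypothetical reward: track a target ball speed of 1.0
        rewards = -np.linalg.norm(self.ball_vel - 1.0, axis=1)
        return self.ball_vel.copy(), rewards

env = VectorizedDribbleEnv(num_envs=4000)
obs, rewards = env.step(np.ones((4000, 2)))
print(obs.shape, rewards.shape)  # (4000, 2) (4000,)
```

One `step` call here advances 4,000 copies of the world, which is why data collection scales roughly with the number of simulated robots rather than wall-clock time.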

The robot starts without knowing how to dribble the ball—it just receives a reward when it does or negative reinforcement when it messes up, noted the researchers. So it's essentially trying to figure out what sequence of forces it should apply with its legs.

“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” said Gabe Margolis, a Ph.D. student at MIT. He co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab.

“Once we've designed that reward, then it's practice time for the robot,” Margolis explained. “In real time, it's a couple of days, and in the simulator, hundreds of days. Over time, it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
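A velocity-tracking dribbling reward of the kind Margolis describes might look like the following sketch. The specific terms and weights are assumptions for illustration, not the published reward:

```python
import numpy as np

def dribbling_reward(ball_vel, commanded_vel, robot_pos, ball_pos,
                     w_track=1.0, w_near=0.2):
    """Reward tracking of a commanded ball velocity, plus a small
    bonus for staying close enough to the ball to kick it."""
    tracking_err = np.linalg.norm(ball_vel - commanded_vel)
    proximity = np.linalg.norm(robot_pos - ball_pos)
    # Exponentials keep both terms bounded in (0, 1].
    return w_track * np.exp(-tracking_err) + w_near * np.exp(-proximity)

# The reward is highest when the ball moves at the commanded velocity.
good = dribbling_reward(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                        np.zeros(2), np.zeros(2))
bad = dribbling_reward(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                       np.zeros(2), np.zeros(2))
print(good > bad)  # True
```

During training, the policy is adjusted to increase this number, which is what "getting better at matching the desired velocity" means in practice.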

The bot could also navigate unfamiliar terrains and recover from falls thanks to a recovery controller the team built into its system. This controller lets the robot get back up after a fall and switch back to its dribbling controller to continue pursuing the ball, helping it handle out-of-distribution disruptions and terrains.
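The switching between the recovery and dribbling controllers can be pictured as a small state machine. The fall-detection signal and thresholds here are hypothetical, chosen only to illustrate the idea:

```python
from enum import Enum

class Mode(Enum):
    DRIBBLE = "dribble"
    RECOVER = "recover"

def select_mode(mode, body_roll_rad, fall_threshold=1.0,
                upright_threshold=0.2):
    """Switch to the recovery controller after a fall, and back to
    dribbling once the body is upright again (hypothetical thresholds)."""
    if mode is Mode.DRIBBLE and abs(body_roll_rad) > fall_threshold:
        return Mode.RECOVER
    if mode is Mode.RECOVER and abs(body_roll_rad) < upright_threshold:
        return Mode.DRIBBLE
    return mode

mode = Mode.DRIBBLE
for roll in [0.1, 1.4, 0.8, 0.1]:   # robot tips over, then rights itself
    mode = select_mode(mode, roll)
print(mode)  # Mode.DRIBBLE
```

The key design point is that recovery is a separate controller: the dribbling policy never has to learn to stand up, it only has to resume once the recovery controller hands control back.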

“If you look around today, most robots are wheeled,” said Pulkit Agrawal, MIT professor, CSAIL principal investigator, and director of Improbable AI Lab. “But imagine that there's a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search and rescue process. We need the machines to go over terrains that aren't flat, and wheeled robots can't traverse those landscapes.”

“The whole point of studying legged robots is to go to terrains outside the reach of current robotic systems,” Agrawal added. “Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”

Gabe Margolis and DribbleBot

Ph.D. student Gabe Margolis and DribbleBot. Source: MIT

Run DribbleBot, run!

The fascination with robot quadrupeds and soccer runs deep—Canadian professor Alan Mackworth first noted the idea in a paper entitled “On Seeing Robots,” presented at VI-92, 1992.

Japanese researchers later organized a workshop on “Grand Challenges in Artificial Intelligence,” which led to discussions about using soccer to promote science and technology. The project was launched as the Robot J-League a year later, and global fervor quickly ensued. Shortly after that, “RoboCup” was born.

Compared with walking alone, dribbling a soccer ball imposes more constraints on DribbleBot's motion and what terrains it can traverse. The robot must adapt its locomotion to apply forces to the ball to dribble.

The interaction between the ball and the landscape, such as thick grass or pavement, can differ from the interaction between the robot and the landscape. For example, a soccer ball will experience a drag force on grass that is absent on pavement, and an incline will apply an acceleration force, changing the ball’s typical path.

However, the robot's ability to traverse different terrains is often less affected by these differences in dynamics, as long as it doesn't slip. The soccer test is therefore sensitive to variations in terrain that locomotion alone is not, said MIT.
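How much terrain changes the ball's behavior can be illustrated with a one-dimensional rolling model; the drag coefficients below are made-up values for illustration, not measured quantities:

```python
def roll_distance(v0, drag_coeff, dt=0.01):
    """Integrate a rolling ball slowed by terrain drag until it
    (nearly) stops, returning the distance traveled in meters."""
    v, x = v0, 0.0
    while v > 1e-3:
        v -= drag_coeff * v * dt   # drag decelerates the ball
        x += v * dt
    return x

# Hypothetical drag: thick grass slows the ball far more than pavement.
grass = roll_distance(v0=3.0, drag_coeff=2.0)
pavement = roll_distance(v0=3.0, drag_coeff=0.3)
print(pavement > grass)  # True
```

The same kick produces very different ball trajectories on the two surfaces, which is why the controller must adapt its dribbling to the terrain rather than apply one fixed gait.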

Sensors enable quadruped to dribble

“Past approaches simplify the dribbling problem, making a modeling assumption of flat, hard ground,” said Ji. “The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously.”

“That's where more difficult dynamics enter the control problem,” he said. “We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task which combines aspects of locomotion and dexterous manipulation together.”

On the hardware side, the robot has a set of sensors that let it perceive the environment, allowing it to feel where it is, “understand” its position, and “see” some of its surroundings. It has a set of actuators that let it apply forces and move itself and objects.

In between the sensors and actuators sits the computer or “brain,” tasked with converting sensor data into actions, which it will apply through the motors. When the robot is running on snow, it doesn't see the snow but can feel it through its motor sensors.

But soccer is a trickier feat than walking, so MIT's team added cameras to the robot's head and body, giving it the new sensory modality of vision on top of the new motor skill. Only then can the robot dribble.

“Our robot can go in the wild because it carries all its sensors, cameras, and compute on board,” said Margolis. “That required some innovations in terms of getting the whole controller to fit onto this onboard compute. That's one area where learning helps because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot.”
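A lightweight onboard policy of the kind Margolis describes can be sketched as a small feed-forward network mapping noisy sensor readings to joint targets. The layer sizes, inputs, and outputs are assumptions for illustration, not the robot's actual network:

```python
import numpy as np

class TinyPolicy:
    """Small MLP: proprioceptive and vision features in, joint
    targets out -- light enough, in spirit, for onboard compute."""

    def __init__(self, obs_dim=48, hidden=64, act_dim=12, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, act_dim))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)      # hidden layer nonlinearity
        return np.tanh(h @ self.w2)     # joint targets bounded in [-1, 1]

policy = TinyPolicy()
noisy_obs = np.random.default_rng(1).normal(size=48)  # simulated sensor noise
action = policy.act(noisy_obs)
print(action.shape)  # (12,)
```

Training such a network on noisy simulated observations is one way learning helps with the onboard-compute constraint: inference is just two small matrix multiplies per control step.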

“This is in stark contrast with most robots today,” he added. “Typically, a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robotic arm! So the whole thing is weighty, hard to move around.”

DribbleBot from MIT CSAIL

CSAIL's Improbable AI Lab has developed DribbleBot. Source: MIT

More work to do

There's still a long way to go in making these robots as agile as their counterparts in nature, and some terrains were challenging for DribbleBot, acknowledged the MIT researchers.

Currently, the controller is not trained in simulated environments that include slopes or stairs. The robot doesn't perceive the geometry of the terrain; it's only estimating its material contact properties, like friction.

If there's a step up, for example, the robot will get stuck and won't be able to lift the ball over the step, an area of future work the team wants to explore.

The researchers said they are also excited to apply lessons learned during the development of DribbleBot to other tasks that combine locomotion and object manipulation, such as quickly transporting diverse objects from place to place using legs or arms.

The research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation (NSF) Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator. The paper will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

