
DeepMind AI learns to play soccer using decades of match simulations

An artificial intelligence learned to skillfully control digital humanoid soccer players by working through decades' worth of simulated matches in just a few weeks

Technology


31 August 2022


AI learned to control digital humanoid soccer players

Liu et al., Sci. Robot. 7, eabo0235

Artificial intelligence has learned to play soccer. By learning from decades' worth of computer simulations, an AI took digital humanoids from flailing tots to proficient players.

Researchers at the AI research company DeepMind trained the AI to play soccer in a computer simulation, using an athletic curriculum that resembles a sped-up version of a human's development from infant to footballer. The AI was given control of digital humanoids with realistic body masses and joint movements.

“We don’t put infants in an 11 versus 11 match,” says Guy Lever at DeepMind. “They first learn to walk around, then they learn to dribble a ball, then you might play one v one or two v two.”

The first phase of the curriculum trained the digital humanoids to run naturally by imitating motion-capture clips of humans playing football. A second phase involved practicing dribbling and shooting the ball through a form of trial-and-error machine learning that rewarded the AI for staying close to the ball.
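For readers curious about what a reward for "staying close to the ball" might look like, here is a minimal, hypothetical sketch in Python. It is not DeepMind's actual reward function, whose terms and scales are not given in this article; the function name and the 5-metre cut-off are illustrative assumptions.

```python
import numpy as np

def proximity_reward(player_pos, ball_pos, max_dist=5.0):
    """Toy shaped reward: largest when the humanoid is at the ball,
    falling linearly to zero at max_dist metres away.

    Hypothetical illustration only -- not the reward used in the paper.
    """
    dist = np.linalg.norm(np.asarray(player_pos) - np.asarray(ball_pos))
    return max(0.0, 1.0 - dist / max_dist)

# A player 1.2 metres from the ball earns a partial reward.
print(proximity_reward([0.0, 0.0], [1.2, 0.0]))  # ≈ 0.76
```

A shaped term like this only guides early exploration; the later phase of training described below relies instead on rewarding match outcomes such as goals.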

The first two phases represented about 1.5 years of simulation training time, which the AI sped through in about 24 hours. But more complex behaviors beyond movement and ball control began emerging after five years of simulated soccer matches. “They learned coordination, but also they learned movement skills that we didn’t have explicitly set as training drills before,” says Nicolas Heess at DeepMind.

The third phase of training challenged the digital humanoids to score goals in 2v2 matches. Teamwork skills, such as anticipating where to receive a pass, emerged over the course of about 20 to 30 years of simulated matches, the equivalent of two to three weeks in the real world. This was reflected in improvements to the digital humanoids’ off-ball scoring opportunity scores, a measure of player performance borrowed from real-world football analytics.
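As a rough back-of-the-envelope illustration of the time compression the article describes (the midpoint values below are assumptions for the calculation, not figures reported by the researchers):

```python
# Phases one and two: ~1.5 years of simulated play in roughly 24 hours.
sim_years = 1.5
wall_clock_days = 1.0
print(f"~{sim_years * 365 / wall_clock_days:.0f}x real time")   # ~548x

# Phase three: 20-30 years of simulated matches in two to three weeks
# (midpoints of 25 years and 2.5 weeks assumed here).
sim_years_2v2 = 25
wall_clock_weeks = 2.5
print(f"~{sim_years_2v2 * 52 / wall_clock_weeks:.0f}x real time")  # ~520x
```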

Such simulations will not immediately lead to flashy football robots. The digital humanoids trained under simplified rules that permitted fouls, surrounded the pitch with a wall-like boundary and omitted set pieces such as throw-ins and goal kicks.

The long learning times make the work difficult to transfer directly to real soccer robots, says Sven Behnke at the University of Bonn in Germany. However, it would be interesting to see whether DeepMind’s approach is competitive in the annual RoboCup 3D Simulation League, he says.

The DeepMind team has begun teaching real robots how to push a ball toward a target and plans to investigate if the same AI training strategy works beyond soccer.

Journal reference: Science Robotics, DOI: 10.1126/scirobotics.abo0235
