DeepMind’s Digital Humanoids Play Soccer to Become More Human

DeepMind is training digital humanoids to play soccer to help them move more like humans

DeepMind, the Alphabet-backed AI firm, is using virtual games to teach its digital humanoid creations to move more like humans. The company pulled out all the stops to teach an AI to play soccer, starting with a virtual player writhing around on the floor, so it nailed at least one aspect of the game right from kick-off.

But mastering the mechanics of the beautiful game, from basics like running and kicking to higher-order concepts like teamwork and tackling, proved a lot more challenging, as new research from the Alphabet-backed AI firm illustrates. The work, published midweek in the journal Science Robotics, might seem flippant, but studying the fundamentals of soccer could one day help robots move around our world in more natural, more human ways.

"To 'solve' soccer, one has to solve lots of open problems on the track to artificial general intelligence [AGI]," says Guy Lever, a research scientist at DeepMind. An AI has to learn everything human players do, even the things we don't have to consciously think about, like exactly how to move each limb and muscle to connect with a moving ball, making thousands of decisions a second.

DeepMind's simulated digital humanoid agents were modelled on actual humans, with 56 points of articulation and a constrained range of motion, meaning that they couldn't, for instance, rotate their knee joint through impossible angles à la Zlatan Ibrahimovic. To start with, the researchers simply gave the agents a goal, for example, to run or to kick a ball, and let them try to figure out how to achieve it through trial and error and reinforcement learning, as was done in the past when researchers taught simulated digital humanoids to navigate obstacle courses.
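The trial-and-error idea can be illustrated with a toy sketch that is not DeepMind's method: a one-dimensional "run to the ball" task where the agent's policy is a single gain parameter, and random perturbations are kept only when they improve the episode reward. All names, dynamics, and parameters here are invented for illustration.

```python
import random

def run_episode(policy_gain, target=10.0, steps=20):
    """Roll out a toy 'run to the target' task: the agent pushes itself
    toward the target with force proportional to the remaining distance,
    and is rewarded for ending the episode close to it."""
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        force = policy_gain * (target - position)  # proportional controller
        velocity = 0.8 * velocity + force          # damped point-mass dynamics
        position += velocity
    return -abs(target - position)  # reward: negative final distance

def trial_and_error(iterations=200, seed=0):
    """Hill-climbing as a stand-in for reinforcement learning: randomly
    perturb the policy parameter and keep the change if reward improves."""
    rng = random.Random(seed)
    gain = 0.0
    best = run_episode(gain)
    for _ in range(iterations):
        candidate = gain + rng.gauss(0.0, 0.05)
        reward = run_episode(candidate)
        if reward > best:
            gain, best = candidate, reward
    return gain, best
```

With a rich humanoid body, the search space is vastly larger than this single parameter, which is why pure trial and error fails, as the next section describes.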

"This didn't work," says Nicolas Heess, also a research scientist at DeepMind and one of the paper's co-authors with Lever. Because of the complexity of the problem, the huge range of options available, and the lack of prior knowledge about the task, the agents had no idea where to start, hence the writhing and twitching.

General training was therefore followed by single-player drills: running, dribbling, and kicking the ball, mimicking the way humans might learn a new sport before diving into a full-match situation. The reinforcement learning rewards included things like successfully following a target without the ball, or dribbling the ball close to a target. This curriculum of skills was a natural way to build toward increasingly complex tasks, says Lever.
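Drill rewards like these are typically implemented as shaped reward functions that decay smoothly with distance, so the agent gets a learning signal even when it is far from succeeding. The following is a minimal sketch of what such rewards might look like; the function names, tolerances, and decay constants are assumptions, not DeepMind's actual reward design.

```python
import math

def follow_reward(player_pos, target_pos, tolerance=1.0):
    """Shaped reward for the 'follow a target' drill: full reward inside
    the tolerance radius, decaying smoothly with distance beyond it."""
    dist = math.dist(player_pos, target_pos)
    return 1.0 if dist <= tolerance else math.exp(-(dist - tolerance))

def dribble_reward(player_pos, ball_pos, target_pos, max_ball_dist=2.0):
    """Shaped reward for the dribbling drill: the ball must stay under the
    player's control while being moved toward the target."""
    if math.dist(player_pos, ball_pos) > max_ball_dist:
        return 0.0  # lost control of the ball: no reward
    return math.exp(-0.5 * math.dist(ball_pos, target_pos))
```

The smooth decay matters: a sparse reward (1 at the target, 0 everywhere else) would leave the agent with no gradient to follow, which is exactly the failure mode the curriculum was designed to avoid.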
