Google DeepMind’s new AI robot plays table tennis at “human level”

While China busies itself collecting table tennis gold medals at the Paris Olympics, an AI-controlled robot from Google DeepMind has achieved “amateur-level performance” in the sport.

In an arXiv paper published this week, Google's artificial intelligence subsidiary explained how the robot works and provided footage of it taking on what we can only assume are willing and enthusiastic table tennis players of varying skill levels.

According to DeepMind, the paddle-wielding robot had to master both basic skills, like returning the ball, and more complex tasks, like long-term planning and strategy. It also played against opponents with different playing styles, drawing on large amounts of data to refine and adapt its approach.

Not quite at Olympic level yet

The robotic arm – and its 3D-printed racket – won 13 out of 29 games against human opponents of varying skill levels. It won 100% of its games against “beginners” and 55% against “intermediate” players. However, it lost every time it faced an “advanced” opponent.

DeepMind said the results of the latest project were a step toward the goal of achieving human-level speed and performance on real-world tasks, a “north star” for the robotics community.

To work toward that goal, the researchers say they made four contributions that could extend the findings beyond hitting a small ball over a tiny net, as difficult as that may be:

  • A hierarchical and modular policy architecture
  • Techniques to enable zero-shot sim-to-real, including an iterative approach to defining the training task distribution based on the real world
  • Real-time adaptation to unseen opponents
  • A user study to test the model in real matches against unseen humans in physical environments
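To give a flavour of the first item on that list, a hierarchical and modular policy roughly means a high-level controller that picks among specialised low-level skill controllers. Here is a minimal, heavily simplified sketch in Python; the skill names, state fields, and routing heuristic are all illustrative assumptions, not DeepMind's actual implementation:

```python
# Hypothetical low-level skill controllers. In a real system each of
# these would be a trained control policy; here they just return a
# labelled command so the routing logic is visible.
def forehand_drive(ball_state):
    return {"skill": "forehand_drive", "target_x": ball_state["x"]}

def backhand_push(ball_state):
    return {"skill": "backhand_push", "target_x": ball_state["x"]}

LOW_LEVEL_SKILLS = {
    "forehand": forehand_drive,
    "backhand": backhand_push,
}

def high_level_policy(ball_state):
    # Toy heuristic: choose a skill based on which side of the table
    # the incoming ball is on. A learned high-level policy would also
    # weigh statistics about the opponent gathered during play.
    side = "forehand" if ball_state["x"] >= 0 else "backhand"
    return LOW_LEVEL_SKILLS[side]

def act(ball_state):
    # One decision step: pick a skill, then let it produce a command.
    skill = high_level_policy(ball_state)
    return skill(ball_state)

command = act({"x": 0.3, "y": 1.2, "vx": -2.0})
print(command["skill"])  # prints forehand_drive
```

The appeal of splitting the policy this way is that each low-level skill can be trained and swapped independently, while the high-level layer handles the strategic choices the paper describes.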

The company also added that its approach resulted in “human-level competitive play and a robot agent that humans actually have fun playing with.” And indeed, the non-robot competitors in the demonstration videos seem to be having fun.

Table tennis robotics

Google DeepMind is not the only robotics company to choose table tennis as a training ground for its systems. The sport demands hand-eye coordination, strategic thinking, speed and adaptability, among other skills, making it a good fit for training and testing AI-controlled robots.

The world's first “table tennis teaching robot” was recognised by Guinness World Records in 2017. The rather impressive machine was developed by the Japanese electronics company OMRON. The latest version is called FORPHEUS, which stands for “Future OMRON Robotics technology for Exploring Possibility of Harmonized aUtomation with Sinic theoretics” and is also a nod to the mythological figure Orpheus.

OMRON says it “embodies the relationship that will exist between humans and technology in the future.”

Google DeepMind isn't making such existential claims for its latest ping-pong champion, but the lessons learned from its development could prove profound for our robot friends in the future. Still, we reckon DeepMind's robotic arm is no slouch either.
