AI Learns to Juggle Dice Using a Robotic Hand
OpenAI sets benchmark in robotic hand movement

The presence of opposable thumbs in human beings has been critical to the advancement of our civilization. Their flexibility and dexterity have enabled us to pick things up, hold them, and maneuver them on the surface of our palm. Embedding such subtle movements in a robotic hand, however, was long considered near impossible. With the help of Artificial Intelligence (AI), researchers at OpenAI have now trained their system, Dactyl, to move the fingers of a robot as artfully as a human, setting a benchmark in hand dexterity.

OpenAI was co-founded by SpaceX and Tesla chief Elon Musk. He has been critical of AI in the past, stating, “AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that.” OpenAI, however, was founded with the mission of developing “safe” general AI. The dexterous hand can be seen as a step toward integrating robots into physical activities that require them to hold and move objects with their fingers.

So how exactly did the researchers manage that? They trained Dactyl using reinforcement learning, or in simple terms, trial and error. The objective of the training was simple: learn how to turn any chosen face of a cube upwards without dropping it. Dactyl was given control of the Shadow Dexterous Hand, the closest approximation to a human hand currently available on the market. The hand comprises five fingers, force and touch sensors, and 24 degrees of freedom, three fewer than a real human hand.
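To make the trial-and-error idea concrete, here is a minimal, hypothetical sketch of a reinforcement-learning episode in Python. The environment `CubeReorientEnv`, its reward values, and the crude proportional policy are invented for illustration only; they are not OpenAI's actual training code or reward function.

```python
import numpy as np

class CubeReorientEnv:
    """Toy environment: the state is the angular distance (radians) between
    the cube's current orientation and the requested goal face."""

    def reset(self):
        self.error = np.pi  # start far from the goal orientation
        return self.error

    def step(self, action):
        # An action nudges the cube; good actions shrink the error,
        # clumsy ones may drop the cube entirely.
        self.error = max(0.0, self.error - action + np.random.normal(0, 0.05))
        dropped = np.random.rand() < 0.01
        reward = -self.error          # closer to the goal means higher reward
        if dropped:
            reward -= 20.0            # heavy penalty for dropping the cube
        done = dropped or self.error < 0.1
        return self.error, reward, done


def run_episode(env, policy):
    """Roll out one trial; a learner improves from many such episodes."""
    state, total = env.reset(), 0.0
    done = False
    while not done:
        action = policy(state)
        state, reward, done = env.step(action)
        total += reward
    return total


env = CubeReorientEnv()
print(run_episode(env, policy=lambda s: 0.1 * s))  # a crude proportional policy
```

Over millions of such episodes, a learning algorithm gradually shifts the policy toward actions that earn higher cumulative reward, which is the essence of the trial-and-error training described above.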

The training was achieved using domain randomization, in which numerous variables, including the size and friction of the cube, the position of the palm, and even the gravity affecting the fall of the cube, were randomized. Dactyl was rewarded for each movement that brought the cube closer to its goal, while any movement that caused the cube to fall incurred a penalty. Imagine a child learning to hold an object up; they will succeed only after trying and failing several times.
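Domain randomization itself is simple to sketch: before each simulated episode, the physics of the world is resampled so the policy cannot overfit to one fixed simulator. The parameter names and ranges in the Python snippet below are made up for illustration and do not come from OpenAI's published configuration.

```python
import random

def randomize_domain():
    """Sample a fresh set of physical parameters for one training episode."""
    return {
        "cube_size_cm":      random.uniform(4.5, 6.5),   # edge length of the cube
        "cube_friction":     random.uniform(0.5, 1.5),   # surface friction coefficient
        "palm_tilt_deg":     random.uniform(-5.0, 5.0),  # slight tilt of the hand
        "gravity_m_s2":      random.uniform(9.0, 10.6),  # perturbed gravity
        "actuator_delay_ms": random.uniform(0.0, 40.0),  # sensing/actuation lag
    }

# Each training episode runs in a freshly randomized world:
for episode in range(3):
    params = randomize_domain()
    print(f"episode {episode}: {params}")
    # simulate_episode(params)  # hypothetical call into the physics simulator
```

Because the policy never sees the same world twice, it is pushed to learn behavior that works across the whole range of conditions, which is what allowed skills learned in simulation to transfer to the physical hand.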

Although Dactyl needed the equivalent of a hundred years of human experience to learn the task, the results have set a benchmark for robotic hand movement. In the simulated environment, Dactyl managed to rotate the cube from one target orientation to the next 50 times in a row; in the real world, the number dropped to a mere 13. Even so, the artfulness of the movement fascinated researchers at OpenAI as well as those at other academic and research institutions.

Antonio Bicchi, a professor of robotics at the Istituto Italiano di Tecnologia, remained critical of Dactyl’s ability and pointed out a number of limitations in the experiment. He said, “The result is still limited to a specific task (rolling a die of convenient size) in rather favorable conditions (the hand is facing up, so that the die falls in the palm), and is not even close to be a conclusive argument that these techniques can solve real-world robotics problem.”

The time it took Dactyl to learn the technique, the equivalent of a hundred human years, is also a major limitation of the experiment, as humans usually need little more than a couple of years to learn the same skill. This suggests that Dactyl might not be that intelligent after all. Nonetheless, OpenAI has set the benchmark for robotic hand movement and a precedent for further research into the subject.
