Researchers are developing robots that can visually learn any movement

Robots soon might mix ingredients, bake cakes, make salads and flip burgers better than any human. And they’re learning to do this by watching cooking videos on YouTube, as we humans habitually do. Teaching robots to cook or perform other tasks by watching videos is the aim of a research project being conducted jointly by local digital research body National ICT Australia and the University of Maryland.


Researchers are developing robots that can visually learn any movement sequence and then repeat it. Picture credit: John T. Consoli. Source: Supplied

It’s not getting robots to bake cakes that is so important to researchers; it’s giving robots the ability to learn any movement sequence and repeat it. Scientists are effectively teaching robots to learn any functionality from the real, physical world. Using video is the first step. In Australia, the project is being led by Dr Yi Li, NICTA senior researcher in computer vision. He said a robot learnt movements and identified objects such as a mixing bowl or spatula by analysing pixels in the video downloaded to it. It then repeated the behaviour.
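At a high level, the pipeline Dr Li describes amounts to scanning video frames, spotting objects, and segmenting the footage into a sequence of actions the robot can later replay. The Python sketch below illustrates that loop in outline only; `detect_objects` and `classify_action` are hypothetical placeholders, since the article does not describe the actual NICTA/UMD models.

```python
# A rough sketch of the video-analysis loop described above, assuming a
# pretrained object detector and a short-clip action classifier.
# detect_objects and classify_action are hypothetical placeholders.
import cv2  # OpenCV (pip install opencv-python)

def detect_objects(frame):
    """Hypothetical per-frame object detector, e.g. a trained CNN.
    Might return labels such as "mixing bowl" or "spatula"."""
    return []

def classify_action(clip):
    """Hypothetical action classifier over a short clip of frames.
    Might return labels such as "pour", "mix", or "flip"."""
    return "unknown"

def learn_from_video(path, clip_len=16):
    """Scan a cooking video; return a replayable (action, objects) sequence."""
    cap = cv2.VideoCapture(path)
    clip, sequence = [], []
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        clip.append(frame)
        if len(clip) == clip_len:       # classify one short clip at a time
            action = classify_action(clip)
            objects = detect_objects(clip[-1])  # objects in the latest frame
            sequence.append((action, objects))
            clip = []
    cap.release()
    return sequence                     # the movement sequence to repeat
```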

Dr Li said the research was a forerunner to developing a domestic robot that could learn new skills in the home. So far the cooking robot had “watched” 88 videos and, by identifying tasks, had learnt how to pick up and turn around objects in the kitchen, pour, mix, and flip a flat object such as a burger, he said.

But it can’t yet execute an entire recipe; the robot’s hand, for example, is not yet capable of cutting objects. But it is early days. Dr Li said the robot might in future be taught to clean rooms, pack things away, tidy your desk and do the housework. “Housework would be our main focus at the moment. Actually housework is more complicated than you think,” Dr Li said.

He said the robot tried to identify not only the movements but also the purpose of each action being carried out. “Usually researchers try to replicate what the people are doing in the video. They didn’t try to understand the goal of an action. If we can understand some high-level purpose, we can have different implementations, different variations,” Dr Li said.
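To make the distinction concrete: a motion-replay system stores one observed trajectory, whereas a goal-level system stores the purpose and can select among several ways of achieving it. The toy Python sketch below illustrates that idea; every goal name and implementation here is invented for illustration and is not from the NICTA/UMD system.

```python
# A toy illustration of goal-level understanding: instead of replaying one
# observed motion, the robot stores a high-level purpose and may choose any
# implementation that achieves it. All goals and actions below are invented.
from typing import Callable

GOAL_IMPLEMENTATIONS: dict[str, list[Callable[[], None]]] = {
    "flip flat object": [
        lambda: print("flip it with a spatula"),
        lambda: print("toss it in the pan"),
    ],
    "cook steak to medium-rare": [
        lambda: print("pan-sear about 3 minutes per side"),
        lambda: print("check with a thermometer until done"),
    ],
}

def achieve(goal: str) -> None:
    """Pick any known implementation of the goal; a real planner would
    choose based on available tools and context (here: the first)."""
    options = GOAL_IMPLEMENTATIONS.get(goal)
    if not options:
        raise ValueError(f"no known way to achieve: {goal}")
    options[0]()

achieve("flip flat object")  # -> flip it with a spatula
```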

He said everything in a household had a purpose. “You want to cook (a steak) to medium-rare, that’s a purpose.” The robot could also be taught to water the plants and feed the fish when you went on holiday. Dr Li estimated this functionality was five to 10 years away, although other research in similar areas might mean it happened sooner. He said a learning robot could obviously also be used in industry: instead of just watering your plants, it could water an entire farm.

Dr Li said he and his colleagues at the University of Maryland had the idea of developing a deep-learning robot before he joined NICTA four years ago. Initially he was involved in NICTA’s bionic eye project, which also involved analysing pixels to detect objects and movement. The robot research was presented this month in Austin, Texas, at the annual conference of the Association for the Advancement of Artificial Intelligence. (Source: theaustralian.com)
