A new robot can now identify objects by touch

Scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are trying to give a robot the literal feels by training it to sense the tactile nuances of an object's texture and to associate those "sensations" with learned visual cues about how the object looks.

Robots that have been programmed to see or feel, however, can't use these signals interchangeably. Researchers at the Massachusetts Institute of Technology are working on an AI system that can "learn to see by touching and feel by seeing".

To bridge this sensory gap, the MIT researchers created a predictive AI system that can generate realistic tactile signals from visual inputs, and can also predict which object, and which part of it, is being touched directly from those tactile signals.

The group used a KUKA robot arm fitted with a special tactile sensor called GelSight, designed by another group at MIT. Using just one simple web camera, the system recorded nearly 200 objects, such as tools, household products, and fabrics, being touched by the arm.

"By wanting on the scene, our mannequin can think about the sensation of touching a flat floor or a pointy edge", says Yunzhu Li, CSAIL Ph.D. pupil and lead creator on a brand new paper concerning the system.


From a dataset of more than 3 million visual/tactile-paired images of objects such as tools, household products, and fabrics, the model learns to imagine the feel of touching an object, encoding details about the objects and the environment. "Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects". The next step is to build a larger dataset so the robot can work in many additional settings.
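To make the idea of visual/tactile-paired data concrete, here is a minimal sketch of what one record in such a dataset might look like. The field names and layout are hypothetical illustrations, not the actual VisGel format; the point is simply that each example ties a camera frame to a simultaneous tactile reading and a contact location.

```python
from dataclasses import dataclass

@dataclass
class PairedSample:
    # Hypothetical record layout; the real VisGel format may differ.
    object_name: str        # e.g. a tool, household product, or fabric
    webcam_frame: list      # stand-in for the RGB image from the web camera
    gelsight_frame: list    # stand-in for the GelSight tactile image
    contact_position: tuple # (x, y) touch location within the webcam frame

# One toy example: the arm touches a screwdriver at pixel (120, 88).
sample = PairedSample("screwdriver", [[0]], [[0]], (120, 88))
print(sample.object_name)  # screwdriver
```

A model trained on millions of such pairs can then be asked either direction: given the webcam frame, predict the tactile image, or given the tactile image, predict what and where is being touched.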

The CSAIL team then combined the VisGel dataset with what are known as generative adversarial networks, or GANs.

Using this combination, the CSAIL engineers had their system hallucinate, that is, create a visual image based exclusively on tactile data. A GAN pits a generator against a discriminator: the generator aims to create realistic-looking images that fool the discriminator.

Every time the discriminator "catches" the generator, it has to expose the internal reasoning for the decision, which allows the generator to repeatedly improve itself, researchers said. It is similar to the mechanism of robots, which can easily recognize and grasp objects.

During testing, if the model was fed tactile data on a shoe, for instance, it could produce an image of where the shoe was most likely to be touched.
