Today’s sensor-based gloves cost thousands of dollars yet often contain only around 50 sensors. Although such gloves are used in hands-free manufacturing and merchandise handling, their limited performance and high cost have slowed adoption.
But MIT researchers have developed a low-cost knitted “scalable tactile glove” (STAG), equipped with about 550 tiny sensors across nearly the entire hand, and used it to compile a massive dataset that enables an AI system to recognize objects through touch alone. Despite producing high-resolution data, the glove is made from commercially available materials totaling around $10.
Each sensor captures pressure signals as a person interacts with objects in various ways. A neural network processes these signals to “learn” a dataset of pressure-signal patterns associated with specific objects. The system then uses that dataset to classify objects and predict their weights by feel alone, with no visual input needed.
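The core idea of matching pressure patterns to object labels can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the researchers’ method: the actual system trains a convolutional neural network on recorded glove data, while this example uses synthetic 32×32 pressure maps (an assumed sensor-grid layout) and a simple nearest-centroid classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frames(center, n=20, noise=0.05):
    """Simulate noisy pressure frames around a characteristic pattern."""
    return center + noise * rng.standard_normal((n, 32, 32))

# One characteristic pressure "signature" per object (synthetic, not real data).
signatures = {"mug": rng.random((32, 32)), "pen": rng.random((32, 32))}

# "Learn" each object as the mean (centroid) of its noisy training frames.
centroids = {name: make_frames(sig).mean(axis=0)
             for name, sig in signatures.items()}

def classify(frame):
    """Label a pressure frame by its closest centroid in Euclidean distance."""
    return min(centroids, key=lambda name: np.linalg.norm(frame - centroids[name]))

# A new frame of the "mug" pattern is recovered by touch signature alone.
test_frame = make_frames(signatures["mug"], n=1)[0]
print(classify(test_frame))  # → mug
```

A real pipeline would replace the centroid lookup with a trained network and the synthetic signatures with frames recorded from the glove, but the structure (pressure frames in, object label out) is the same.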
The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.
In a paper published in Nature, the researchers describe a dataset they compiled using STAG for 26 common objects—including a soda can, scissors, tennis ball, spoon, pen, and mug. Using the dataset, the system predicted the objects’ identities with up to 76% accuracy. The system can also predict the correct weights of most objects within about 60 grams.
The tactile sensing system could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of interacting with objects.
“Humans can identify and handle objects well because we have tactile feedback. As we touch objects, we feel around and realize what they are. Robots don’t have that rich feedback,” says Dr. Subramanian Sundaram, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’ve always wanted robots to do what humans can do, like doing the dishes or other chores. If you want robots to do these things, they must be able to manipulate objects really well.”
Source: Massachusetts Institute of Technology