In recent years, researchers worldwide have been trying to develop sensors that replicate the human sense of touch in robots and improve their manipulation skills. While a few of these sensors have achieved remarkable results, most existing solutions have small sensitive fields or can only gather low-resolution images.
A team of engineers at UC Berkeley recently built a new multi-directional tactile sensor, known as OmniTact, that tackles some of the limitations of previously developed sensors. OmniTact, featured in a paper pre-published on arXiv and set to be presented at ICRA 2020, acts as an artificial finger that allows robots to sense the properties of the objects they are holding or manipulating.
OmniTact, the sensor built by Ebert and his colleagues, is an adaptation of GelSight, a tactile sensor developed by researchers at MIT and UC Berkeley. GelSight can produce detailed 3D maps of an object’s surface and detect some of its traits.
In contrast with GelSight, OmniTact is multi-directional, which means that all of its sides have sensing capabilities. In addition, it can provide high-resolution readings, is highly compact and has a curved shape. When integrated into a gripper or robotic hand, the sensor acts as a sensitive artificial 'finger,' allowing the robot to manipulate and sense a wide range of objects varying in shape and size.
OmniTact was developed by embedding multiple micro-cameras into artificial skin made of silicone gel. The cameras detect multi-directional deformations of the gel-based skin, producing a valuable signal that can then be analyzed using computer vision and image processing techniques to infer information about the objects that a robot is manipulating.
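To give a rough sense of how such a pipeline might work, the sketch below compares each micro-camera frame against a reference image of the undeformed gel to localize contact. This is only a minimal illustration of the general idea using NumPy; the function names, image sizes, and intensity threshold are assumptions for the example, not the actual method used by the OmniTact team.

```python
import numpy as np

def contact_map(reference: np.ndarray, frame: np.ndarray,
                threshold: float = 25.0) -> np.ndarray:
    """Return a boolean mask of pixels where the gel appears deformed.

    reference: grayscale image of the undeformed gel, shape (H, W)
    frame:     grayscale image of the gel during contact, same shape
    threshold: illustrative intensity-difference cutoff (assumed value)
    """
    diff = np.abs(frame.astype(np.float32) - reference.astype(np.float32))
    return diff > threshold

def contact_centroid(mask: np.ndarray):
    """Estimate the contact location as the centroid of deformed pixels."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no contact detected in this camera's view
    return float(ys.mean()), float(xs.mean())

# Toy example: a synthetic 64x64 gel image with a pressed region.
ref = np.full((64, 64), 100, dtype=np.uint8)
pressed = ref.copy()
pressed[20:30, 40:50] = 180  # brighter patch where the gel deforms

mask = contact_map(ref, pressed)
print(contact_centroid(mask))  # prints (24.5, 44.5), the patch center
```

A real system would fuse such per-camera estimates across all of the sensor's micro-cameras and typically use learned models rather than a fixed threshold, but the core signal is the same: deviations of the gel image from its resting appearance.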