GelSight has introduced the latest version of its GelSight Mobile probe, the Series 2. This new generation of GelSight’s mobile device offers a sleek form-factor that is one-third lighter and less than half the volume of its predecessor, allowing it to scan surfaces in tighter spaces, while maintaining accuracy, speed, and field of view.
GelSight’s technology enables digital tactile sensing with the sensitivity and resolution of human touch. Data captured by GelSight’s elastomeric tactile sensing platform is processed by proprietary software and algorithms to provide detailed, accurate surface characterization that can generate significant gains in productivity both on the production floor and in the field, while also reducing the costs associated with manual or tool-based visual inspection. The handheld GelSight Mobile device can be deployed on production and assembly lines to enable rapid, well-documented quality-assurance decisions. Dimensions of scratches, dents, hits, gaps, offsets, hole diameters, and fastener flushness can be measured in high resolution, on any surface, in seconds.
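To give a feel for the kind of measurement described above, here is a minimal, illustrative sketch of extracting a scratch's depth and width from a surface height map. This is not GelSight's software; the function, threshold, and synthetic data are all invented for illustration, and a 2D array of heights in micrometers stands in for whatever the device actually captures.

```python
import numpy as np

def scratch_depth_and_width(height_map, pixel_size_um, threshold_um=1.0):
    """Estimate the maximum depth and widest extent of a scratch in a
    height map (2D array of surface heights, in micrometers), where the
    scratch appears as values below the surrounding surface."""
    # Use the median height as a simple estimate of the nominal surface.
    surface = np.median(height_map)
    depth_map = surface - height_map          # positive inside the scratch
    scratch_mask = depth_map > threshold_um   # pixels deeper than threshold
    if not scratch_mask.any():
        return 0.0, 0.0
    max_depth = float(depth_map[scratch_mask].max())
    # Widest extent: scratch-pixel count in the widest row, scaled to um.
    widths = scratch_mask.sum(axis=1)
    max_width = float(widths.max() * pixel_size_um)
    return max_depth, max_width

# Synthetic example: a flat 10-um surface with a 3-pixel-wide,
# 5-um-deep groove, imaged at an assumed 4 um per pixel.
hm = np.full((50, 50), 10.0)
hm[:, 20:23] -= 5.0
depth, width = scratch_depth_and_width(hm, pixel_size_um=4.0)
# depth -> 5.0 um, width -> 12.0 um
```

The median-plane baseline is the simplest possible choice; a real inspection pipeline would fit and remove the surface form before measuring defects.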
With the help of AI and a neural network, this wiggling robotic finger digs into substances such as sand to locate and identify buried objects. Robotic hands, fingers, grippers, and their various sensors are usually deployed to function in “free space” for many obvious reasons. But in the real world, it’s often necessary to poke and probe into materials such as sand while looking for objects like buried nails or coins (best case) or explosives (a far-worse case).
Addressing this challenge, a team at the Massachusetts Institute of Technology (MIT) has devised and tested their second-generation robotic finger specifically designed for this class of mission with impressive success: Their “Digger Finger” was able to dig through granular media such as sand and rice and correctly sense the shapes of submerged items it encountered. The objective is an electromechanical finger that replicates the capabilities of a human one, which can find, feel, and “see” (mentally) the shape of a buried object based on sensing and learned experience.
The project consists of two major aspects: a robotic finger that can penetrate a granular mass to get close enough to illuminate an object and capture a reflected—albeit highly distorted—pattern, followed by artificial-intelligence/neural-network algorithms for data extraction and analysis. At the core of the hardware design is GelSight, a tactile sensing technology developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
What you’ll learn:
- Why and how a complicated robotic-finger design was improved by a research team.
- How this finger is used to dig into bulk materials to locate and “see” small objects.
- How the reflected image of the object is the basis for the data set that determines the actual shape of the object.
- How AI and a neural net were employed to analyze the distorted, reflected object images that were captured.
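The analysis side pairs each distorted tactile image with a learned model that scores candidate shapes. As a rough illustration of that classification step—not MIT's actual network; the architecture, layer sizes, and class labels here are all invented—a minimal forward pass over a flattened tactile image might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyShapeNet:
    """Minimal two-layer network (illustrative only) mapping a flattened
    tactile image to probabilities over hypothetical shape classes."""
    def __init__(self, n_pixels, n_hidden, n_classes):
        self.w1 = rng.normal(0.0, 0.1, (n_pixels, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def forward(self, x):
        h = np.maximum(0.0, x @ self.w1 + self.b1)  # ReLU hidden layer
        logits = h @ self.w2 + self.b2
        e = np.exp(logits - logits.max())           # numerically stable softmax
        return e / e.sum()

# A 16x16 "tactile image" flattened to 256 features, scored over
# 3 assumed classes (say: nail, coin, cube).
net = TinyShapeNet(n_pixels=256, n_hidden=32, n_classes=3)
probs = net.forward(rng.normal(size=256))
```

In practice the researchers would train on many labeled tactile images so the network learns to undo the distortion that the granular medium introduces; the untrained weights above only show the data flow.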
On the ‘Tech of Sports’ podcast, Rick talks with Felix Breitschädel, PhD, of the Norwegian Olympic Sports Centre, about how the Norwegian Ski Team is using unique technology to give its skiers an edge on the slopes by arming them with detailed data, thanks to GelSight—whose handheld 3D measurement device the team uses to, quite literally, measure snow.
The skiers need to understand how the surface of their skis will interact with the snow. Although other tools could measure the snow’s topography, samples had to be taken out of the natural environment and into the lab, which affected the results. With the handheld GelSight Mobile device, Felix and his team can investigate the snow’s surface right out on the slopes or rink, identifying individual snow grains and capturing images of the surface for future reference. The measurements also help inform R&D decisions, such as developing new ski grinds for upcoming competitions.
The Future of Everything covers the innovation and technology transforming the way we live, work and play, with monthly issues on health, money, cities and more. This month is Artificial Intelligence, online starting July 2 and in the paper on July 9.
Even the smartest computers cannot fully understand the world without the ability to see, hear, smell, taste or touch. But in the decadeslong race to make software think like humans—and beat them at “Jeopardy!”—the idea of endowing a machine with humanlike senses seemed far-fetched. Not anymore, engineers and researchers say.
Capabilities powered by artificial intelligence, like image or voice recognition, are already commonplace features of smartphones and virtual assistants. Now, customized sensors, machine learning and neural networks—a subset of AI that mimics the way our brains work—are pushing digital senses to the next level, creating robots that can tell when a package is fragile, sniff out an overheated radiator or identify phony Chardonnay.
Hype around AI is running high, and much of the research is in early stages. Here, we look at 10 working models and prototypes of AI with sensory abilities.