How Computers with Humanlike Senses Will Change Our Lives

The Future of Everything covers the innovation and technology transforming the way we live, work and play, with monthly issues on health, money, cities and more. This month’s topic is artificial intelligence, online starting July 2 and in the paper on July 9.

Even the smartest computers cannot fully understand the world without the ability to see, hear, smell, taste or touch. But in the decadeslong race to make software think like humans—and beat them at “Jeopardy!”—the idea of endowing a machine with humanlike senses seemed far-fetched. Not anymore, engineers and researchers say.

Capabilities powered by artificial intelligence, like image or voice recognition, are already commonplace features of smartphones and virtual assistants. Now, customized sensors, machine learning and neural networks—a subset of AI that mimics the way our brains work—are pushing digital senses to the next level, creating robots that can tell when a package is fragile, sniff out an overheated radiator or identify phony Chardonnay.

Hype around AI is running high, and much of the research is in early stages. Here, we look at 10 working models and prototypes of AI with sensory abilities.

Click here to read the full article in the Wall Street Journal (subscription may be required).

Enabling 3D Measurement for Shot-Peened Surface Characterization

GelSight’s Chief Product Officer and co-founder, Dr. Kimo Johnson, spoke with the team at Shot Peener Magazine about how our technology can help provide more accurate assessments of shot peening coverage, rate and other process parameters.

Click here to learn more in the full feature on page 36 of their Summer 2021 issue.

Slender robotic finger senses buried items

Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand-new quest. Previously, researchers have used technologies that sense the subterranean from above, such as ground-penetrating radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

“So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
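In spirit, that reconstruction works like photometric stereo: each LED color lights the membrane from a different direction, so the color recorded at a pixel encodes the local surface slope, and integrating those slopes yields a height map. The following Python sketch illustrates the idea; the linear color-to-gradient calibration and the Frankot-Chellappa integration step are simplifying assumptions for illustration, not the exact GelSight pipeline.

import numpy as np

def gradients_from_rgb(image, calib):
    # Map each pixel's (R, G, B) response to surface gradients (gx, gy).
    # `calib` is an assumed 2x3 matrix fit beforehand by pressing known
    # shapes (e.g., a sphere) into the gel; the real sensor uses a richer
    # lookup table, so this linear map is purely illustrative.
    h, w, _ = image.shape
    rgb = image.reshape(-1, 3).astype(float)
    grads = rgb @ calib.T                       # (N, 2) per-pixel (gx, gy)
    return grads[:, 0].reshape(h, w), grads[:, 1].reshape(h, w)

def integrate_gradients(gx, gy):
    # Recover a relative height map from the gradient fields using
    # Frankot-Chellappa integration (least squares in the Fourier domain).
    h, w = gx.shape
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    fx = np.fft.fftfreq(w).reshape(1, -1)       # frequencies along x (columns)
    fy = np.fft.fftfreq(h).reshape(-1, 1)       # frequencies along y (rows)
    denom = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                           # avoid dividing by zero at DC
    Z = (-2j * np.pi * (fx * Gx + fy * Gy)) / denom
    depth = np.real(np.fft.ifft2(Z))
    return depth - depth.min()                  # height relative to the lowest point

# Usage: depth = integrate_gradients(*gradients_from_rgb(frame, calib))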

For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” says Ouyang. “That’s how we were able to get it into such a compact form.” The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place, which makes them difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

“We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” says Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” says Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.

Original article published on MIT News: https://news.mit.edu/2021/robotic-finger-buried-underground-0526

‘Digging’ robot can locate objects concealed by granular media

A researcher from Massachusetts Institute of Technology (MIT) has developed a robot that can search for objects buried beneath sand and other granular media.

The so-called Digger Finger, which features a GelSight sensor — a vision-based tactile sensor — can reportedly penetrate granular media to locate concealed objects.

According to the research, the prototype penetrates granular media via mechanical vibrations that cause the media to take on fluid-like properties, whereas current tactile sensors used for similar functions typically become jammed. Meanwhile, high-resolution tactile sensing enables identification of the objects concealed by the granular media.

The prototype is being considered for applications including deep sea exploration and mining. The robot could also potentially be used for explosive ordnance disposal, improvised explosive device (IED) detection, and buried cable retrieval.

The Digger Finger is detailed in a paper titled “Digger Finger: GelSight Tactile Sensor for Object Identification Inside Granular Media,” which is available on the arXiv preprint server.

Original article published on Engineering360: https://insights.globalspec.com/article/16051/digging-robot-can-locate-objects-concealed-by-granular-media

Researcher turns £1 make-up sponge into sensor with potential to improve medical care

A Liverpool Hope University student has found a revolutionary way to improve medical care using a simple silicone make-up sponge – bought from a high street chemist.

Alexander Co Abad discovered the £1 beauty product could form the main component of a sensor designed to be as sensitive as human skin when it comes to providing ‘touch’ feedback.

He also says there’s the potential for it to be used in a range of medical procedures – from helping robotic arms to grip instruments during surgery, to even detecting tumours.

Alexander’s work has been centred on advancing something called the ‘GelSight’ sensor, first created back in 2009 by scientists at America’s Massachusetts Institute of Technology (MIT).

Alexander is from Manila and is currently working towards a PhD at Hope through a scholarship from the Philippine government. He had been “fascinated” by the product since it was introduced to him by his research supervisor, but wanted to make it even more accessible to medical professionals.

To do that he created a homemade, low-cost version of the sensor, which utilises an ultraviolet (UV) torch, an LED light and a webcam to provide a highly detailed visual 3D ‘map’ of any surface it touches.

He says the gadget can spot details not detectable with the human eye, as well as picking up slight vibrations and variations in pressure.

Alexander, who is studying computer science and informatics while specialising in robotics, said: “The idea of the sensor is to be as sensitive as human skin when it comes to ‘touch’ – meaning it can detect the smallest details on the surface of a coin, for example, or even the tiniest vibrations.

“When attached to a robot’s finger, it is able to sense your pulse. And it can even differentiate between rough and smooth surfaces.

“The overall concept is to enable a robot to feel and sense like a human would, which gives it many advantages for medical work.

“Another application it can be used for is to detect things under the skin, so potentially in the future it could be used to check for tumours or to detect possible lumps in the breast.”

Alexander believes that another key application is in helping a robot arm to grip. He said: “Right now there’s a problem in the robotics community when it comes to gripping.

“Sometimes the object a robotic arm touches is so smooth there’s a tendency for it to slide when gripped. We need a sensor to measure this slip so that we can increase the gripping force, and that’s where this technology comes into play.”

Alexander’s dream is for the sensor to be used during surgery. He explained that endoscopic procedures, where a doctor examines internal organs using a camera, are another example where the sensor could be effective, as it could give doctors extra feedback about activity within the human body.

The student began his studies at Hope back in 2019, having come to Liverpool from De La Salle University in Manila, and began his research to create a low cost version of a sensor that replicates the qualities of human skin.

He said: “I thought the best place to start would be shops that sell cosmetics.

“I first noticed a pink sponge and realised that, if I took off the layer of coloured sponge, I had the silicone I needed for just £1 and that’s the basic component for the technology.

“I then put a camera on one side so you can see and record the object touching the silicone, and managed to make something for a very low cost.”

As a teacher in the electronics and communications engineering department at De La Salle University, Alexander is eventually planning to take his newfound knowledge back to the Philippines while continuing his research in his home city once he has graduated from Hope.

Now in the final year of his studies in Liverpool, Alexander is continuing his research and looking for new ways to improve his device, including the addition of temperature sensing capabilities.

What Robots Need to Become Better Helpers

Both the government and private sector continue to work on building more functional robots to accomplish various tasks, especially ones that aren’t suited or safe for humans. For example, NASA’s Mars Perseverance Mission, which is fully robotic, is scheduled to make planetfall on Mars next week. In addition to the Perseverance rover itself, which is pretty well-known at this point, the mission will also be carrying the Ingenuity Mars Helicopter, a robotic drone specially designed to fly around and explore within the thin atmosphere of Mars.

But mobility is only one aspect of creating the advanced robots and robotic tools of the future. For the most part, we have the locomotion part down. We already have thousands of flying drones and robots, plus specialized models that can climb up the side of cliffs or work completely in or under the water.

The problem is that once we get those robots into inaccessible or inhospitable places, they need to be able to actually manipulate their environment in the same way that a human would. And for that, they pretty much need hands, ideally ones with fingers and maybe a thumb. I recently talked with a researcher at the Army Research Laboratory who told me that the ability to manipulate physical space, through either some type of actuator or robotic hand, would be an important key to successful robot deployments in the future.

Last week, we got a first look at what that might look like. Boston Dynamics, one of the most advanced robot-making companies in the world, upgraded its well-known dog-like robot, Spot, with a very functional robotic hand. Previously, Spot robots were able to traverse rough terrain and even stairs but were stymied by things like a closed door. The company released a fascinating video showing Spot making good use of its new appendage. The hand is mounted on the end of an articulated arm in the center of the robot, which lets it extend in almost any direction.

“Since first launching Spot, we have worked closely with our customers to identify how the robot could best support their mission-critical applications,” said Robert Playter, CEO of Boston Dynamics. “Our customers want reliable data collection in remote, hazardous and dynamic worksites. We developed the new Spot products with these needs in mind, and with the goal of making it easy to regularly and remotely perform critical inspections, improving safety and operations.”

The video shows Spot performing some very fine manipulations with its hand, including planting a sapling (after first digging a hole for it) without snapping the delicate young tree in half. It also does some other tasks in the video including collecting a bundle of cloth out in the snowy woods, opening an office door and shutting off a valve to stop a leaking pipe. All of that is impressive, but one wonders just how delicate Spot, or any robot, can really be without a real sense of touch.

It’s a question that Professor Ted Adelson of the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory has been working on for many years. He has designed a way for robots to simulate a sense of touch, which he believes will eventually enable them to be as precise as a human hand.

“In order for robots to do good manipulation, they need good fingers,” he said. “We’re trying to make fingers that can match the capabilities of human fingers.”

The technology that Adelson and the team at MIT developed is called GelSight, and it involves deploying a soft covering over a robotic hand. Tiny cameras in the material monitor the surrounding soft “skin” and record how much it deforms as the hand grips objects. That data is then fed into a computer model that helps the robot “see” how much pressure is needed to grasp an object without squeezing it too hard. The fingers can also be used to measure force, shear and slip.
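As a rough illustration of how that pressure feedback could close the loop on grip force, the sketch below runs a simple proportional controller toward a target gel indentation; the "sensor" and "gripper" objects are hypothetical stand-ins rather than any real GelSight or robot API.

def regulate_grip(sensor, gripper, target_deform_mm=0.3, gain=0.5, max_steps=200):
    # Tighten or relax the grip until the membrane indentation reported by the
    # tactile sensor settles near the setpoint, so the object is held firmly
    # without being crushed. Interfaces here are hypothetical placeholders.
    for _ in range(max_steps):
        deform = sensor.mean_deformation()      # millimeters of gel indentation
        error = target_deform_mm - deform
        if abs(error) < 0.02:                   # close enough to the setpoint
            break
        gripper.adjust_force(gain * error)      # proportional correction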

He talked about this new technology and its importance in an interview posted on YouTube last month. From his explanation, it seems like the next step is making sure that robots can use the collected touch data intelligently with their new hands so they can accomplish a variety of tasks requiring everything from brute strength to fine motor skills.

Of course, the other thing that our robots need to help usher in the future is the ability to perform tasks independently without human interaction. Technically, the definition of a robot is a device that can carry out complex actions automatically. So having a human piloting a device’s every move means that it’s not technically a robot at all. But we are working on that too, with great strides in artificial intelligence and machine learning being made every day. 

It’s just that when working on something as complex and powerful as creating artificial intelligence, it’s easy to forget little things, like the power and the necessity of touch. Hopefully, we are starting to see the tip of the iceberg now in that new area of artificial senses, with future robots literally getting a helping hand from the latest research.

Original article published on NextGov: https://www.nextgov.com/ideas/2021/02/what-robots-need-become-better-helpers/171962/

A technique that allows robots to estimate the pose of objects by touching them

Humans are able to find objects in their surroundings and detect some of their properties simply by touching them. While this skill is particularly valuable for blind individuals, it can also help people with no visual impairments to complete simple tasks, such as locating and grabbing an object inside a bag or pocket. 

Researchers at Massachusetts Institute of Technology (MIT) have recently carried out a study aimed at replicating this human capability in robots, allowing them to understand where objects are located simply by touching them. Their paper, pre-published on arXiv, highlights the advantages of developing robots that can interact with their surrounding environment through touch rather than merely through vision and audio processing.

“The goal of our work was to demonstrate that with high-resolution tactile sensing it is possible to accurately localize known objects even from the first contact,” Maria Bauza, one of the researchers who carried out the study, told TechXplore. “Our approach makes an important leap compared to previous works on tactile localization, as we do not rely on any other external sensing modality (like vision) or previously collected tactile data related to the manipulated objects. Instead, our technique, which was trained directly in simulation, can localize known objects from the first touch which is paramount in real robotic applications where real data collection is expensive or simply unfeasible.”

As it is trained in simulations, the technique devised by Bauza and her colleagues does not require extensive data collection. The researchers initially developed a framework that simulates contacts between a given object and a tactile sensor, thus assuming that a robot will have access to data about the object it is interacting with (e.g., its 3-D shape, properties, etc.). These contacts are represented as depth images, which show the extent of an object’s penetration into the tactile sensor.
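One simple way to render such a simulated contact, sketched below, is to transform surface samples of the object into the sensor frame at a candidate pose and record how far each sample presses past the gel surface; the gel footprint, image resolution, and press depth used here are illustrative assumptions, not the authors' simulator.

import numpy as np

def simulate_contact_depth(points, pose, gel_size=(0.02, 0.02), res=64, press_depth=0.0015):
    # points: (N, 3) surface samples of the object, in meters.
    # pose:   4x4 transform placing the object in the sensor frame,
    #         with z pointing out of the gel (z < 0 means penetration).
    pts = (pose[:3, :3] @ points.T).T + pose[:3, 3]
    depth = np.zeros((res, res))
    inside = (np.abs(pts[:, 0]) < gel_size[0] / 2) & \
             (np.abs(pts[:, 1]) < gel_size[1] / 2) & (pts[:, 2] < 0)
    px = ((pts[inside, 0] / gel_size[0] + 0.5) * (res - 1)).astype(int)
    py = ((pts[inside, 1] / gel_size[1] + 0.5) * (res - 1)).astype(int)
    pen = np.clip(-pts[inside, 2], 0.0, press_depth)   # penetration per sample
    np.maximum.at(depth, (py, px), pen)                # keep the deepest value per pixel
    return depth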

Subsequently, Bauza and her colleagues used state-of-the-art machine-learning techniques for computer vision and representation learning to match real tactile observations gathered by a robot with the set of contacts generated in simulation. Every contact in the simulation dataset is weighted according to the likelihood that it matches the real, observed contact, which ultimately allows the framework to produce a probability distribution over possible object poses.
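Conceptually, that matching step amounts to scoring every simulated contact against the real observation in an embedding space and normalizing the scores into a distribution over the associated candidate poses. The sketch below assumes the embeddings already come from a learned encoder and uses cosine similarity with a softmax, an illustrative choice rather than the paper's exact formulation.

import numpy as np

def pose_distribution(real_embedding, sim_embeddings, sim_poses, temperature=0.1):
    # Weight each simulated contact by how closely its embedding matches the
    # real tactile observation, then normalize into probabilities over the
    # corresponding candidate poses.
    real = real_embedding / np.linalg.norm(real_embedding)
    sims = sim_embeddings / np.linalg.norm(sim_embeddings, axis=1, keepdims=True)
    scores = sims @ real                                 # cosine similarities
    weights = np.exp((scores - scores.max()) / temperature)
    weights /= weights.sum()
    return list(zip(sim_poses, weights))                 # (candidate pose, probability)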

The tactile sensor used by the researchers. Credit: Bauza et al.

“Our method encodes contact, represented as depth images, into an embedded space, which greatly simplifies computational cost allowing real-time execution,” Bauza said. “As it can generate meaningful pose distributions, it can be easily combined with additional perception systems. In our work, we exemplify this in a multi-contact scenario where several tactile sensors simultaneously touch an object, and we must incorporate all these observations into the object’s pose estimation.”
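For illustration, an encoder of the kind Bauza describes might look like the small convolutional network below, which maps a 64x64 contact depth image to a 128-dimensional embedding; the architecture is an assumption made for this sketch, not the one reported in the paper.

import torch
import torch.nn as nn

contact_encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128),                              # 128-d embedding
)

# embedding = contact_encoder(depth_image)   # depth_image: tensor of shape (1, 1, 64, 64)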

Essentially, the method devised by this team of researchers can simulate contact information simply based on an object’s 3-D shape. As a result, it does not require any previous tactile data gathered while closely examining the object. This allows the technique to generate pose estimates for an object from the first time it is touched by a robot’s tactile sensors.

“We realized that tactile sensing can be extremely discriminative and produce highly accurate pose estimations,” Bauza said. “While vision will sometimes suffer from occlusions, tactile sensing does not. As a result, if a robot contacts a part of an object that is very unique, i.e., no other touch on the object would look similar to it, then our algorithm can easily identify the contact and thus the object’s pose.”

As many objects have non-unique regions (i.e., the way in which they are positioned can result in very similar contacts), the method developed by Bauza and her colleagues predicts pose distributions, rather than single pose estimates. This particular feature is in stark contrast with previously developed approaches for object pose estimation, which tend to only gather single pose estimates. Moreover, the distributions predicted by the MIT team’s framework can be directly merged with external information to further reduce uncertainty about an object’s pose.

“Notably, we also observed that combining several contacts simultaneously, as it happens when using several fingers to contact an object, rapidly decreases any uncertainty on an object’s pose,” Bauza said. “This validates our intuition that adding contacts on an object constrains its pose and eases estimation.”
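Treating simultaneous contacts as independent pieces of evidence over a shared set of candidate poses, their individual distributions can be fused by multiplying and renormalizing, as in the brief sketch below (an illustrative formulation, not necessarily the authors' exact update).

import numpy as np

def fuse_contacts(per_contact_weights):
    # per_contact_weights: list of probability vectors, one per finger,
    # all defined over the same ordered set of candidate poses.
    fused = np.ones_like(per_contact_weights[0])
    for w in per_contact_weights:
        fused *= w                        # combine independent evidence
    return fused / fused.sum()            # renormalize into a distribution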

In order to assist humans in their daily activities, robots should be able to complete manipulation tasks with high precision, reliability and accuracy. As manipulating objects directly implies touching them, developing effective techniques to enable tactile sensing in robots is of key importance.

“The ability to sense touch has recently received great interest from industry, and our work achieves this via a combination of three factors: (1) a high-resolution but inexpensive sensing technique based on using small cameras to capture the deformation of a touch surface (e.g., GelSight sensing); (2) recent compact integration of this sensing technique into robot fingers (e.g., GelSlim fingers); (3) and a computational framework based on deep-learning to process effectively the high-resolution tactile images for tactile localization of known parts (e.g., this work),” Alberto Rodriguez, another researcher involved in the study, told TechXplore. “This type of technology is becoming mature and the industry is seeing the value for automating tasks that require precision such as in assembly automation.”

The technique devised by this team of researchers allows robots to estimate the pose of objects they are manipulating in real-time, with high levels of accuracy. This gives a robot the chance to make more accurate predictions about the effects of its movements or actions, which could enhance its performance in manipulation tasks.

To work, the method created by Bauza and her colleagues requires some information about the shape of the object that a robot is manipulating. Therefore, it may prove particularly valuable for implementations in industrial settings, where manufacturers assemble items based on a clear model of their shapes.

In their future work, the researchers plan to extend their framework so that it also incorporates visual information about objects. Ideally, they would like to turn their technique into a visuo-tactile sensing system that can estimate the pose of objects with even greater accuracy.

“Another ongoing work of ours deeply related to this approach aims at exploring the use of tactile perception for complex manipulation tasks,” Bauza said. “In particular, we are learning models that allow a robot to perform accurate pick-and-place operations. The goal is to find object manipulations that not only aim at stable grasps but also aid perception. By using our approach, we can also target grasps that result in discriminative contacts which will improve tactile localization.”

Original article published on TechXplore: https://techxplore.com/news/2021-01-technique-robots-pose.html

MIT’s Two-finger Model Could Perfect the Robot Hand

MIT continues to advance the utility of robots, and researchers at the college’s Computer Science and AI Lab have announced they’ve made some strides in addressing one of many awkward challenges: robot hands.

That’s right – robot hands are getting a creepy makeover, but one that should make them far more functional.

A spokesperson for MIT recently told Engadget that the manipulation of thin, flexible objects has been a nearly impossible feat for a robot, which is why the traditional approach has used mechanical fixtures that move slowly and deliberately. 

But MIT’s newest gripper uses two “fingers” that more closely resemble a human grip. Tactile sensors called “GelSight,” built from soft rubber with embedded cameras, sit at the fingertips of a movable arm. One controller monitors the grip and another the hand’s pose, and their combined efforts are said to improve the robot hand’s ability to keep hold of wiring without it slipping through its fingers.

The benefits of the technology could hit consumers and businesses alike. Engadget says the grippers could find a use for home chores like folding laundry but also for “technical purposes” like separating or shaping wires.

Original article published on ThomasNet: https://www.thomasnet.com/insights/mit-s-two-finger-model-could-perfect-the-robot-hand/

Gripper Handles Freely Moving Cables

For humans, it can be challenging to manipulate thin, flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.

Standard approaches have used a series of slow and incremental deformations as well as mechanical fixtures to get the job done. Researchers have developed a system that uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

The team first built a two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based GelSight sensors built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.

The second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength while the other adjusts the gripper pose to keep the cable within the gripper.
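The sketch below outlines one iteration of such a pair of parallel loops: a proportional grip-force adjustment driven by the sensed friction, and a pose adjustment that keeps the cable centered and aligned between the fingers. The "sensor", "gripper", and "arm" objects are hypothetical placeholders, not the actual interfaces of the MIT system.

def cable_following_step(sensor, gripper, arm,
                         target_friction=0.8, force_gain=0.4, pose_gain=0.6):
    # Loop 1: modulate grip strength so the cable slides smoothly without
    # escaping; friction_estimate() stands in for the frictional force
    # inferred from the GelSight images.
    friction = sensor.friction_estimate()
    gripper.adjust_force(force_gain * (target_friction - friction))

    # Loop 2: adjust the gripper pose so the cable stays centered between
    # the fingers and aligned with the sliding direction.
    offset, angle = sensor.cable_pose()          # lateral offset and in-plane angle
    arm.move_gripper(lateral=-pose_gain * offset,
                     rotate=-pose_gain * angle)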

When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable hand-over-hand (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.

The robot performed an action that humans routinely do when plugging earbuds into a cellphone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.

Cable-following is challenging for two reasons. First, it requires controlling both the grasp force (to enable smooth sliding) and the grasp pose (to prevent the cable from falling from the gripper’s fingers). Second, that information is hard to capture with conventional vision systems during continuous manipulation, because the cable is usually occluded and the images are expensive to interpret and sometimes inaccurate; it cannot be directly observed with vision sensors alone, hence the team’s use of tactile sensors. The gripper’s joints are also flexible, protecting them from potential impact. The algorithms generalize to cables with various physical properties, such as material, stiffness, and diameter, and to different manipulation speeds.

When the team compared different controllers on its gripper, its control policy retained the cable in hand over longer distances than three alternatives; an open-loop controller, for example, followed only 36 percent of the total length, easily lost the cable when it curved, and needed many re-grasps to finish the task.

The team observed that it was difficult to pull the cable back when it reached the edge of the finger because of the convex surface of the GelSight sensor. They hope to improve the finger-sensor shape to enhance the overall performance. They also plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles and eventually explore autonomous cable manipulation tasks in the automotive industry.

Original article published on Tech Briefs: https://www.techbriefs.com/component/content/article/tb/supplements/md/briefs/37388

Elastomeric Sensor Visualizes and Measures 3D Topography

GelSight, the developer of industrial 3D imaging solutions for the aerospace, automotive, electronics, and other industries, has announced that it has raised $10 million in new funding. The funding will be used to accelerate GelSight’s growth as adoption of its unique elastomeric-based imaging system gains deeper traction with major aerospace customers around the globe.

GelSight Mobile, the company’s flagship device, is a handheld instrument that precisely and repeatably visualizes and measures the 3D topography of any surface in seconds, revealing microscopic structures that are otherwise impossible to assess precisely in real time. GelSight’s proprietary elastomeric sensor conforms to the surface topography of any material, including metals, composites, and glass, to reveal the features of the surface regardless of ambient lighting conditions or material reflectivity. The GelSight Mobile system gives instant visual feedback with high spatial resolution down to the micron level; the 3D depth map is calculated from images of the surface, providing position, depth, and other derived surface measurements.

GelSight Mobile uses a 5 MP, 60 FPS camera and has a depth sensitivity of less than 1 micron and a capture time of 100 ms. The standard device has a field of view of 8.4 mm x 7.1 mm, while the extended device offers a 16.9 mm x 14.1 mm viewing field.

“GelSight’s handheld systems enable dramatic improvements to conventional inspection and quality processes, and we are seeing broader use among the world’s leading aerospace and automobile manufacturers,” said Kimo Johnson, Chief Executive Officer of GelSight. “We are excited to have the backing of Anzu Partners to help us take our commercialization and product development efforts to the next level, expanding into other major aircraft manufacturers and aircraft maintenance operations here in the United States and abroad.”

Rolls-Royce, an early adopter of the technology, is actively deploying GelSight Mobile, validating critical aerospace applications.

“We have integrated GelSight Mobile into a number of our global value streams where the technology has proven its value very quickly,” said Alistair Donaldson, Transformation Executive – Head of Innovation and New Product Design at Rolls-Royce. “We are pleased to be supporting GelSight in their growth journey and look forward to working together on the future roadmap of GelSight products and services.”

Original article published on Metrology News: https://metrology.news/elastomeric-sensor-visualizes-and-measures-3d-topography/