What Robots Need to Become Better Helpers

Both the government and private sector continue to work on building more functional robots to accomplish various tasks, especially ones that aren’t suited or safe for humans. For example, NASA’s fully robotic Mars 2020 Perseverance mission is scheduled to land on Mars next week. In addition to the Perseverance rover itself, the mission is carrying the Ingenuity Mars Helicopter, a robotic drone specially designed to fly around and explore within the thin atmosphere of Mars.

But mobility is only one aspect of creating the advanced robots and robotic tools of the future. For the most part, we have the locomotion part down: we already have thousands of flying drones and robots, plus specialized models that can climb cliff faces or operate entirely in or under the water.

The problem is that once we get those robots into inaccessible or inhospitable places, they need to be able to actually manipulate their environment in the same way that a human would. And for that, they pretty much need hands, ideally ones with fingers and maybe a thumb. I recently talked with a researcher at the Army Research Laboratory who told me that the ability to manipulate physical space, through either some type of actuator or robotic hand, would be an important key to successful robot deployments in the future.

Last week, we got a first look at what that might look like. Boston Dynamics, one of the most advanced robot-making companies in the world, upgraded its well-known dog-like robot, Spot, with a very functional robotic hand. Previously, Spot robots were able to traverse rough terrain and even stairs but were stymied by things like a closed door. The company released a fascinating video showing Spot making good use of its new appendage. The hand is mounted on the end of an articulated arm in the center of the robot, which lets it extend in almost any direction.

“Since first launching Spot, we have worked closely with our customers to identify how the robot could best support their mission-critical applications,” said Robert Playter, CEO of Boston Dynamics. “Our customers want reliable data collection in remote, hazardous and dynamic worksites. We developed the new Spot products with these needs in mind, and with the goal of making it easy to regularly and remotely perform critical inspections, improving safety and operations.”

The video shows Spot performing some very fine manipulations with its hand, including planting a sapling (after first digging a hole for it) without snapping the delicate young tree in half. It also performs other tasks in the video, including collecting a bundle of cloth out in the snowy woods, opening an office door and shutting off a valve to stop a leaking pipe. All of that is impressive, but one wonders just how delicate Spot, or any robot, can really be without a real sense of touch.

It’s a question that Professor Ted Adelson of the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory has been working on for many years. He has designed a way for robots to simulate a sense of touch, which he believes will eventually enable them to be as precise as a human hand.

“In order for robots to do good manipulation, they need good fingers,” he said. “We’re trying to make fingers that can match the capabilities of human fingers.”

The technology that Adelson and the team at MIT developed is called GelSight, and it involves deploying a soft covering over a robotic hand. Tiny cameras in the material monitor the surrounding soft “skin” and record how much it deforms as the hand grips objects. That data is then fed into a computer model that helps the robot “see” how much pressure is needed to grasp an object without squeezing it too hard. The fingers can also be used to measure force, shear and slip.
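
As a rough illustration of that idea, here is a minimal Python sketch that turns a camera-derived deformation map of the gel into grip feedback. The linear-elastic stiffness model, the constants and the function names are all illustrative assumptions, not GelSight’s published pipeline.

```python
import numpy as np

# Minimal sketch (not GelSight's actual pipeline): treat the gel as a
# linear-elastic layer, so the total normal force is proportional to the
# integrated indentation depth, and slip shows up as tangential drift of
# the contact patch between frames. All constants are placeholders.

GEL_STIFFNESS = 2.5e4   # assumed effective stiffness (force per depth per area)
PIXEL_AREA = 1e-8       # assumed area covered by one pixel, m^2

def estimate_normal_force(deformation_map: np.ndarray) -> float:
    """Integrate per-pixel indentation depth (meters) into a normal force."""
    return float(np.sum(deformation_map) * GEL_STIFFNESS * PIXEL_AREA)

def estimate_shear_drift(prev_map: np.ndarray, curr_map: np.ndarray):
    """Crude slip cue: how far the contact-patch centroid moved, in pixels."""
    def centroid(m):
        total = m.sum() + 1e-12
        ys, xs = np.indices(m.shape)
        return (xs * m).sum() / total, (ys * m).sum() / total
    (x0, y0), (x1, y1) = centroid(prev_map), centroid(curr_map)
    return x1 - x0, y1 - y0

# Example: a synthetic 50-micron 'poke' in the middle of a 64x64 gel patch
depth = np.zeros((64, 64))
depth[24:40, 24:40] = 50e-6
print(f"estimated normal force: {estimate_normal_force(depth):.2e} N")
```

In a real system, estimates like these would feed the grip controller, letting the hand back off before crushing an object or tighten before it slips.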

He talked about this new technology and its importance in an interview posted on YouTube last month. From his explanation, it seems like the next step is making sure that robots can use the collected touch data intelligently with their new hands so they can accomplish a variety of tasks requiring everything from brute strength to fine motor skills.

Of course, the other thing that our robots need to help usher in the future is the ability to perform tasks independently, without human interaction. Technically, the definition of a robot is a device that can carry out complex actions automatically, so a device whose every move is piloted by a human isn’t really a robot at all. But we are working on that too, with great strides in artificial intelligence and machine learning being made every day.

It’s just that when working on something as complex and powerful as creating artificial intelligence, it’s easy to forget little things, like the power and the necessity of touch. Hopefully, we are starting to see the tip of the iceberg now in that new area of artificial senses, with future robots literally getting a helping hand from the latest research.

Original article published on NextGov: https://www.nextgov.com/ideas/2021/02/what-robots-need-become-better-helpers/171962/

A technique that allows robots to estimate the pose of objects by touching them

Humans are able to find objects in their surroundings and detect some of their properties simply by touching them. While this skill is particularly valuable for blind individuals, it can also help people with no visual impairments to complete simple tasks, such as locating and grabbing an object inside a bag or pocket. 

Researchers at Massachusetts Institute of Technology (MIT) have recently carried out a study aimed at replicating this human capability in robots, allowing them to understand where objects are located simply by touching them. Their paper, pre-published on arXiv, highlights the advantages of developing robots that can interact with their surrounding environment through touch rather than merely through vision and audio processing.

“The goal of our work was to demonstrate that with high-resolution tactile sensing it is possible to accurately localize known objects even from the first contact,” Maria Bauza, one of the researchers who carried out the study, told TechXplore. “Our approach makes an important leap compared to previous works on tactile localization, as we do not rely on any other external sensing modality (like vision) or previously collected tactile data related to the manipulated objects. Instead, our technique, which was trained directly in simulation, can localize known objects from the first touch which is paramount in real robotic applications where real data collection is expensive or simply unfeasible.”

As it is trained in simulation, the technique devised by Bauza and her colleagues does not require extensive data collection. The researchers first developed a framework that simulates contacts between a given object and a tactile sensor, which assumes the robot has access to a model of the object it is interacting with (e.g., its 3-D shape and properties). These contacts are represented as depth images, which show the extent of an object’s penetration into the tactile sensor.
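
As a toy illustration of that simulation step, the sketch below renders a contact depth image from an object given as a point cloud in the sensor frame, with the gel modeled as the plane z = 0. The geometry, resolution and numbers are our simplifying assumptions, not the paper’s.

```python
import numpy as np

# Toy version of the simulation idea: given an object's 3-D shape (here a
# point cloud already expressed in the sensor's frame), render the contact
# as a depth image recording how far each point penetrates the gel.
# Resolution and sensor size are arbitrary placeholder values.

def render_contact_depth(points: np.ndarray, res: int = 32,
                         sensor_size: float = 0.02) -> np.ndarray:
    """points: (N, 3) array; the gel surface is the plane z = 0,
    and z < 0 means the object is pressing into the sensor."""
    depth = np.zeros((res, res))
    half = sensor_size / 2
    for x, y, z in points:
        if z >= 0 or abs(x) > half or abs(y) > half:
            continue  # point is not touching the gel patch
        i = int((y + half) / sensor_size * (res - 1))
        j = int((x + half) / sensor_size * (res - 1))
        depth[i, j] = max(depth[i, j], -z)  # keep deepest penetration per pixel
    return depth

# Example: sample a sphere of radius 5 mm pressed 1 mm into the gel
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
sphere = dirs * 0.005 + np.array([0.0, 0.0, 0.004])  # center 4 mm above plane
print(render_contact_depth(sphere).max())  # ~0.001 m of penetration
```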

Subsequently, Bauza and her colleagues used state-of-the-art machine-learning techniques for computer vision and representation learning to match real tactile observations gathered by a robot against the set of contacts generated in simulation. Every contact in the simulation dataset is weighted by the likelihood that it matches the real, observed contact, which ultimately allows the framework to estimate a probability distribution over possible object poses.
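
In rough terms, the matching step can be sketched as follows, with a placeholder encoder standing in for the learned network (whose actual architecture is not described in this article):

```python
import numpy as np

# Sketch of the matching step: embed the observed tactile image, compare it
# with precomputed embeddings of simulated contacts (one per candidate pose),
# and turn the similarities into a probability distribution over poses via a
# softmax. The `embed` function below is a placeholder for the learned encoder.

def embed(depth_image):
    """Placeholder encoder; in the paper this is a learned neural network."""
    v = depth_image.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-12)

def pose_distribution(observed, sim_images, temperature=0.05):
    """Weight each simulated contact by how well it matches the observation."""
    obs_e = embed(observed)
    sims = np.array([obs_e @ embed(s) for s in sim_images])  # cosine similarity
    logits = sims / temperature
    logits -= logits.max()              # for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()      # probabilities over candidate poses

# Example: three candidate poses; the second simulated contact matches best.
rng = np.random.default_rng(1)
obs = rng.random((16, 16))
candidates = [rng.random((16, 16)),
              obs + 0.01 * rng.random((16, 16)),
              rng.random((16, 16))]
print(pose_distribution(obs, candidates))
```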

[Image caption: The tactile sensor used by the researchers. Credit: Bauza et al.]

“Our method encodes contact, represented as depth images, into an embedded space, which greatly simplifies computational cost allowing real-time execution,” Bauza said. “As it can generate meaningful pose distributions, it can be easily combined with additional perception systems. In our work, we exemplify this in a multi-contact scenario where several tactile sensors simultaneously touch an object, and we must incorporate all these observations into the object’s pose estimation.”
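
Treating each sensor’s weight vector as an independent likelihood over the same candidate poses, that multi-contact combination can be sketched as a multiply-and-renormalize step; the same operation also shows how an external prior, say from a vision system, could be merged in. This mirrors the idea described above, not the authors’ exact computation.

```python
import numpy as np

# Sketch of multi-contact fusion: each sensor contributes a weight vector over
# the same set of candidate object poses; treating them as independent
# likelihoods, multiply them together (optionally with an external prior,
# e.g. from vision) and renormalize.

def fuse_pose_distributions(per_sensor, prior=None):
    fused = np.ones_like(per_sensor[0]) if prior is None else np.asarray(prior, float)
    for dist in per_sensor:
        fused = fused * dist            # independent-likelihood assumption
    return fused / (fused.sum() + 1e-12)

# Example: each finger alone is ambiguous between two poses, but together
# the two contacts single out pose 0.
finger_a = np.array([0.45, 0.45, 0.10])
finger_b = np.array([0.45, 0.10, 0.45])
print(fuse_pose_distributions([finger_a, finger_b]))
```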

Essentially, the method devised by this team of researchers can simulate contact information simply based on an object’s 3-D shape. As a result, it does not require any previous tactile data gathered while closely examining the object. This allows the technique to generate pose estimates for an object from the first time it is touched by a robot’s tactile sensors.

“We realized that tactile sensing can be extremely discriminative and produce highly accurate pose estimations,” Bauza said. “While vision will sometimes suffer from occlusions, tactile sensing does not. As a result, if a robot contacts a part of an object that is very unique, i.e., no other touch on the object would look similar to it, then our algorithm can easily identify the contact and thus the object’s pose.”

As many objects have non-unique regions (i.e., the way in which they are positioned can result in very similar contacts), the method developed by Bauza and her colleagues predicts pose distributions, rather than single pose estimates. This particular feature is in stark contrast with previously developed approaches for object pose estimation, which tend to only gather single pose estimates. Moreover, the distributions predicted by the MIT team’s framework can be directly merged with external information to further reduce uncertainty about an object’s pose.

“Notably, we also observed that combining several contacts simultaneously, as happens when using several fingers to contact an object, rapidly decreases any uncertainty about an object’s pose,” Bauza said. “This validates our intuition that adding contacts on an object constrains its pose and eases estimation.”

In order to assist humans in their daily activities, robots should be able to complete manipulation tasks with high precision, reliability and accuracy. As manipulating objects directly implies touching them, developing effective techniques to enable tactile sensing in robots is of key importance.

“The ability to sense touch has recently received great interest from industry, and our work achieves this via a combination of three factors: (1) a high-resolution but inexpensive sensing technique based on using small cameras to capture the deformation of a touch surface (e.g., GelSight sensing); (2) recent compact integration of this sensing technique into robot fingers (e.g., GelSlim fingers); (3) and a computational framework based on deep-learning to process effectively the high-resolution tactile images for tactile localization of known parts (e.g., this work),” Alberto Rodriguez, another researcher involved in the study, told TechXplore. “This type of technology is becoming mature and the industry is seeing the value for automating tasks that require precision such as in assembly automation.”

The technique devised by this team of researchers allows robots to estimate the pose of objects they are manipulating in real time, with high levels of accuracy. This gives a robot the chance to make more accurate predictions about the effects of its movements or actions, which could enhance its performance in manipulation tasks.

To work, the method created by Bauza and her colleagues requires some information about the shape of the object that a robot is manipulating. Therefore, it may prove particularly valuable for implementations in industrial settings, where manufacturers assemble items based on a clear model of their shapes.

In their future work, the researchers plan to extend their framework so that it also incorporates visual information about objects. Ideally, they would like to turn their technique into a visuo-tactile sensing system that can estimate the pose of objects with even greater accuracy.

“Another ongoing work of ours deeply related to this approach aims at exploring the use of tactile perception for complex manipulation tasks,” Bauza said. “In particular, we are learning models that allow a robot to perform accurate pick-and-place operations. The goal is to find object manipulations that not only aim at stable grasps but also aid perception. By using our approach, we can also target grasps that result in discriminative contacts which will improve tactile localization.”

Original article published on TechXplore: https://techxplore.com/news/2021-01-technique-robots-pose.html

MIT’s Two-finger Model Could Perfect the Robot Hand

MIT continues to advance the utility of robots, and researchers at the Institute’s Computer Science and AI Lab have announced they’ve made some strides in addressing one of many awkward challenges: robot hands.

That’s right – robot hands are getting a creepy makeover, but one that should make them far more functional.

A spokesperson for MIT recently told Engadget that the manipulation of thin, flexible objects has been a nearly impossible feat for a robot, which is why the traditional approach has used mechanical fixtures that move slowly and deliberately. 

But MIT’s newest gripper uses two “fingers” that more closely resemble a human grip. The fingertips carry tactile sensors called “GelSight,” built from a soft rubber surround with embedded cameras, and the gripper rides on a movable arm. One controller monitors the grip and another the hand’s pose, and their combined efforts are said to improve the way the robot hand maintains a handle on wiring without it slipping through its fingers.

The benefits of the technology could hit consumers and businesses alike. Engadget says the grippers could find a use for home chores like folding laundry but also for “technical purposes” like separating or shaping wires.

Original article published on ThomasNet: https://www.thomasnet.com/insights/mit-s-two-finger-model-could-perfect-the-robot-hand/

Gripper Handles Freely Moving Cables

For humans, it can be challenging to manipulate thin, flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.

Standard approaches have used a series of slow and incremental deformations as well as mechanical fixtures to get the job done. Researchers have developed a system that uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

The team first built a two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based GelSight sensors built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.

The second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength while the other adjusts the gripper pose to keep the cable within the gripper.
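
A minimal sketch of those two parallel loops might look like the following, with hypothetical sensor readings and gains; the actual controllers run on live GelSight estimates and are not published in this article.

```python
# Hypothetical sketch of the two parallel controllers described above.
# Setpoints and gains are invented for illustration.

FRICTION_SETPOINT = 0.3  # assumed friction level that allows smooth sliding
GRIP_GAIN = 0.5
POSE_GAIN = 0.8

def grip_controller(grip_force: float, friction: float) -> float:
    """Loosen when measured friction is above the setpoint (cable binds
    instead of sliding); tighten when below (risk of dropping the cable)."""
    return grip_force - GRIP_GAIN * (friction - FRICTION_SETPOINT)

def pose_controller(cable_offset_m: float) -> float:
    """Command a gripper displacement that re-centers the cable in the grasp."""
    return -POSE_GAIN * cable_offset_m

# One iteration with made-up tactile readings:
new_force = grip_controller(grip_force=1.0, friction=0.45)
move = pose_controller(cable_offset_m=0.002)   # cable is 2 mm off-center
print(f"grip force -> {new_force:.2f} N, correction -> {move * 1000:.1f} mm")
```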

When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable hand-over-hand (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.

The robot performed an action that humans routinely do when plugging earbuds into a cellphone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.
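
Rendered as a simple state machine, that demo sequence might look like this sketch, where the sensor callbacks are stand-ins for the robot’s real tactile estimates:

```python
# Hypothetical state-machine rendering of the earbud demo described above.

def insert_plug(slide_step, plug_felt, adjust_pose, pose_ok, insert):
    """Slide along the cable, stop when the plug reaches the fingers,
    reorient it, then push it into the jack."""
    state = "FOLLOW"
    while state != "DONE":
        if state == "FOLLOW":
            slide_step()
            if plug_felt():          # tactile signature of the hard plug
                state = "ALIGN"
        elif state == "ALIGN":
            adjust_pose()
            if pose_ok():
                state = "INSERT"
        elif state == "INSERT":
            insert()
            state = "DONE"

# Demo with trivial stubs: the plug is "felt" after three sliding steps.
steps = {"n": 0}
insert_plug(
    slide_step=lambda: steps.__setitem__("n", steps["n"] + 1),
    plug_felt=lambda: steps["n"] >= 3,
    adjust_pose=lambda: None,
    pose_ok=lambda: True,
    insert=lambda: print("plug inserted after", steps["n"], "slide steps"),
)
```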

Cable-following is challenging for two reasons. First, it requires simultaneously controlling the grasp force (to enable smooth sliding) and the grasp pose (to prevent the cable from falling from the gripper’s fingers). Second, this state is hard to capture with conventional vision systems during continuous manipulation: the cable is usually occluded and expensive to interpret visually, and the contact forces can’t be directly observed with vision sensors at all, hence the team’s use of tactile sensors. The gripper’s joints are also flexible, protecting them from potential impact. The algorithms also generalize to cables with various physical properties, such as material, stiffness and diameter, and to different sliding speeds.

When the team compared different controllers on its gripper, its control policy kept the cable in hand over longer distances than three other controllers did. An open-loop controller, for example, followed only 36 percent of the total cable length, easily lost the cable when it curved, and needed many re-grasps to finish the task.

The team observed that it was difficult to pull the cable back when it reached the edge of the finger because of the convex surface of the GelSight sensor. They hope to improve the finger-sensor shape to enhance the overall performance. They also plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles and eventually explore autonomous cable manipulation tasks in the automotive industry.

Original article published on Tech Briefs: https://www.techbriefs.com/component/content/article/tb/supplements/md/briefs/37388

Elastomeric Sensor Visualizes and Measures 3D Topography

GelSight, the developer of industrial 3D imaging solutions for the aerospace, automotive, and electronics industries, has announced that it has raised $10 million in new funding. The funding will be used to accelerate GelSight’s growth as adoption of its unique elastomeric imaging system gains deeper traction with major aerospace customers around the globe.

GelSight Mobile, the company’s flagship device, is a handheld instrument that precisely and repeatably visualizes and measures the 3D topography of any surface in seconds, revealing microscopic structures that are otherwise impossible to assess precisely in real time. GelSight’s proprietary elastomeric sensor conforms to the surface topography of any material, including metals, composites, and glass, revealing surface features regardless of ambient lighting conditions or material reflectivity. The system calculates a 3D depth map from images of the surface, giving instant visual feedback along with position, depth, and other derived surface measurements at spatial resolution down to the micron level.
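
The depth-from-images step is, at its core, a photometric reconstruction: the painted gel is imaged under several known illumination directions, per-pixel surface normals are recovered from the intensities, and the normals are integrated into a height map. Below is a generic textbook sketch of that idea under Lambertian assumptions; it is not GelSight’s proprietary algorithm, and the light directions are invented.

```python
import numpy as np

# Generic photometric-stereo sketch: with a Lambertian surface and three
# known light directions L, per-pixel intensities satisfy I = L @ n, so the
# surface normal is n = inv(L) @ I; slopes are then integrated into heights.

L = np.array([[ 0.00,  0.50, 0.87],
              [ 0.43, -0.25, 0.87],
              [-0.43, -0.25, 0.87]])  # three assumed light directions

def normals_from_images(images: np.ndarray) -> np.ndarray:
    """images: (3, H, W) intensities -> (H, W, 3) unit surface normals."""
    h, w = images.shape[1:]
    I = images.reshape(3, -1)          # stack pixels as columns
    n = np.linalg.solve(L, I)          # solve L @ n = I for every pixel
    n /= np.linalg.norm(n, axis=0) + 1e-12
    return n.T.reshape(h, w, 3)

def heights_from_normals(n: np.ndarray) -> np.ndarray:
    """Integrate slopes (-nx/nz, -ny/nz) into a rough height map by cumsum."""
    zx = -n[..., 0] / n[..., 2]
    zy = -n[..., 1] / n[..., 2]
    return np.cumsum(zx, axis=1) + np.cumsum(zy, axis=0)
```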

GelSight Mobile uses a 5 MP, 60 fps camera, with a depth sensitivity of <1 µm and a capture speed of 100 ms. The standard device has a field of view of 8.4 mm × 7.1 mm, while the extended device offers a 16.9 mm × 14.1 mm viewing field.

“GelSight’s handheld systems enable dramatic improvements to conventional inspection and quality processes, and we are seeing broader use among the world’s leading aerospace and automobile manufacturers,” said Kimo Johnson, Chief Executive Officer of GelSight. “We are excited to have the backing of Anzu Partners to help us take our commercialization and product development efforts to the next level, expanding into other major aircraft manufacturers and aircraft maintenance operations here in the United States and abroad.”

Rolls-Royce, an early adopter of the technology, is actively deploying GelSight Mobile, validating critical aerospace applications.

“We have integrated GelSight Mobile into a number of our global value streams where the technology has proven its value very quickly,” said Alistair Donaldson, Transformation Executive – Head of Innovation and New Product Design at Rolls-Royce. “We are pleased to be supporting GelSight in their growth journey and look forward to working together on the future roadmap of GelSight products and services.”

Original article published on Metrology News: https://metrology.news/elastomeric-sensor-visualizes-and-measures-3d-topography/

Forbes: Digitizing The Finger: Startup Gelsight Hones Measurement System That Could Speed Up Aerospace Work

The human finger is incredibly sensitive, with a dense array of receptors in the tip that allow us to feel features on the nanometer scale. When Kimo Johnson visited aerospace companies that machine metal parts, he learned that many were relying on experienced workers to conduct a “fingernail test” to quickly judge whether a surface defect was just a small scratch or deep enough to potentially compromise the part.

Johnson’s Boston-area startup, Gelsight, aims to provide a more accurate but equally fast alternative with a handheld probe it’s developed. The tip contains a clear gel pad that, like our fleshy fingertips, can conform around an object. Using computer vision techniques, the system “turns touch into an image,” says Johnson, allowing it to map surface features in 3D and measure them down to the single-digit micron level. 

The company is starting to get traction: A number of large aerospace companies are moving toward deploying the tool, Johnson says, including airplane engine maker Rolls-Royce.

Now Gelsight has raised $10 million in a B round that takes its total funding to $11.8 million. It’s expecting revenue to top $4 million this year on sales of over 100 systems and 2,500 gel cartridges.

While the 42-year-old Johnson hopes aerospace will be Gelsight’s breakthrough application, benchtop versions of its technology have been used for a wide array of tasks in the nine years since he co-founded the company based on his postdoctoral work at MIT, including by Harvard researchers to map fish scales and by cosmetics developers to measure how well a product filled in wrinkles.

Rolls-Royce is distributing the handheld device to service reps in its maintenance shops to assess engine parts for damage. Until now, to get an accurate measurement of scratches and dents in sometimes hard-to-reach places, Rolls and other companies have used a method akin to how dentists make molds of teeth: pressing a wet composite material against the spot, where it hardens into a negative of the defect that is then sent off to a lab for cross-sectioning and measurement. It can take as much as 24 hours to get a result.

“We haven’t found another product that has the capability of Gelsight,” says Alistair Donaldson, a Rolls-Royce executive focused on innovation who’s helped the startup hone its technology for practical use. “I could put it in the hands of a child and he could take an accurate measurement.”   

The device is based on work by MIT professor Edward Adelson, a specialist in computer vision who became fascinated by how the sense of touch worked while raising his children. After learning from colleagues that no one yet had developed an artificial tactile sensor with anywhere near the ability of human skin, Adelson set out to do so, leveraging his vision expertise.

“The concept was what would it look like if you had a camera inside your skin and you could see when something’s poking into your skin,” says Johnson, who joined Adelson’s lab as a post-doctoral fellow in 2008 after earning a Ph.D. in computer science from Dartmouth.

Adelson developed a clear pad made of a rubbery thermoplastic elastomer, coated on one face with a metallic paint. When an object is pressed into the painted side, the pad takes on the object’s shape and presents a surface that a camera can easily capture in high fidelity through the gel, without the distortions that reflective or transparent materials in the object would otherwise introduce. Johnson built the hardware and wrote the algorithms to accurately map surfaces using image processing and machine learning techniques.

Along with a 2011 paper describing the system, Johnson published YouTube videos that caught wider attention, showing renderings of the complex geometry of an Oreo cookie and the raised structure of a printed letter on a $20 bill. He says that led the team to receive thousands of messages from companies and engineers asking if it could be used to measure everything from features of human skin to the quality of abrasives, and a visit by two Boeing engineers with a suitcase full of parts.

“That’s what prompted me to start this company,” Johnson says.    

He launched Gelsight in 2011 with CTO Janos Rohaly, a former lecturer at MIT who cofounded a dental 3D imaging startup that was acquired by 3M, and former CEO Bill Yost. (Johnson has been CEO since 2016.) They made a series of one-off benchtop machines for corporate R&D departments with gel pads optimized for specific applications, such as a toothpaste maker that squished globs of paste to assess its mix of abrasives. But Johnson says he came to realize this approach was a dead end: each company generally only wanted one device that wasn’t widely saleable.

A cellphone manufacturer in China offered a bigger opportunity in mass production, asking Gelsight in 2015 to make a system that could check whether components like metal buttons were the right size. That led Johnson to an epiphany.

“We were making beautiful 3D maps of surfaces, but what the customer often wants is just a single number: What’s the depth of this feature, what’s the chamfer angle,” he says.

They began looking for other industries where a single measurement was needed, day in and day out. That led them to aerospace, particularly scratches and dents on high-value metal parts.

With the proceeds from the sales to the Chinese cellphone maker and lessons learned on what it took to make a system robust and simple enough to be used by ordinary workers, they made the handheld version of the device, debuting it in 2017. 

“Now rather than a single customer we’ve got the entire aerospace market, and that gives us a much better ability to grow the company,” Johnson says. 

Donaldson of Rolls-Royce says he could see hundreds of the $40,000 Gelsight handheld systems being used at his company. Gelsight also earns money on sales of replacement gel cartridges, which wear out after roughly a week of full-time use and go for $120 apiece. 

In conjunction with the $10 million fundraising round, which was led by the venture capital firm Anzu Partners, Gelsight added two former high-ranking military aviation officers to its board to help it crack the defense market, where the pressure is intense to quickly inspect hard-used aircraft and get them back in the air. “If a stone hits the blade of a helicopter, you need to know if this thing can fly,” says Johnson.

With the fresh funding, Johnson aims to double Gelsight’s current headcount of eight and build a smaller version of the wand to get into tighter spaces. He’d like to make it the same size as a human finger. “People use their finger a lot in manufacturing to feel different spaces, if they could have a digital tool that’s a similar form factor, that would be ideal,” he says.

The company is also adapting the technology to make robotic hands that can judge hardness, making them better able to handle delicate and small objects.

There are plenty of other technologies capable of fixed-station inspection, but Gelsight faces little competition from portable alternatives apart from Arizona-based 4D Technologies, which makes a handheld, laser-based device.

It’s been a twisting journey for Johnson. A saxophonist and guitarist, he double-majored in music and math at the University of New Hampshire and had contemplated trying to make a career in music until he developed an interest in computer science while studying for a master’s in electro-acoustic music at Dartmouth. Leaving academia to found a business is another move he wasn’t planning, but he says startup life reminds him of being in a rock band.

“You have a small group of people, you’re trying to break into a space and the odds are stacked against you,” he says. Fortunately, Gelsight has dodged rock band-style interpersonal drama, he says. “We’ve got a great team.”

Original article published in Forbes: https://www.forbes.com/sites/jeremybogaisky/2020/01/23/vision-through-touch-startup-gelsight-hones-measurement-system-that-could-speed-up-aerospace-work/?sh=620f05954ea8

A Helping Hand: MIT’s entrepreneurial ecosystem boosts sensor tech startup

In 2004, Professor Edward “Ted” Adelson was focused on a successful career studying human and artificial vision.

Then he had children.

“I thought I’d be fascinated watching them discover the world through sight,” says Adelson, the John J. and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences at MIT. “But what I actually found most fascinating was how they explored the world through touch.”

That fascination led Adelson to invent a touch-based technology: a sort of artificial finger consisting of a gel-based skin covering an internal camera. The device could chart surface topographies through physical contact—creating something like sight through touch. That technology is now the lifeblood of GelSight, the startup that Adelson founded in 2011 along with two MIT colleagues.

Originally “a solution in search of a problem,” as Adelson describes it, GelSight now produces bench-based and handheld sensors deployed for quality control in industries such as aerospace and consumer electronics. The company is also pursuing other commercial applications. Based not far from MIT in Waltham, Massachusetts, GelSight is closing its second round of financing and appears poised for profitability.

On the surface, GelSight’s story reads like another MIT cradle-to-corporation fairy tale. But the team’s odyssey from concept to company was filled with complex passages the founders were ill-prepared to navigate.

“We were academics,” Adelson recalls. “We had this technology and thought it would be easy to transform it into a profitable enterprise. We learned very quickly that the technical invention is the easiest part for people like us. Developing a product and building a company is way harder. That requires the effort and expertise of many smart people who must work a very long time. Fortunately, we had great connections available to us through the MIT community. The resources we were able to tap into at MIT were essential in creating and sustaining GelSight.”

Cofounders meet on campus

Adelson’s first collaborator in developing the underlying technology was Kimo Johnson, who joined his laboratory as a postdoc in 2008. “Ted had invented this material that could make very precise measurements in 3-D,” says Johnson, CEO and cofounder of GelSight. “We published several papers on the technology as academics tend to do. But we also made videos and posted them on YouTube. The response was amazing. My inbox was flooded with emails asking about potential applications. That was when we realized we should form a company.”

“The technical invention is the easiest part for people like us. Developing a product and building a company is way harder,” Adelson says.

As a first step, Johnson collaborated with students in a course called iTeams at the MIT Sloan School of Management to draft a hypothetical business plan built around GelSight technology. The plan proposed a potential application in the inspection of helicopter blades. That exercise helped him understand how valuable a handheld device that employed GelSight technology could be to professionals who inspect and repair critical surfaces. “This was another piece of information that encouraged us to move forward,” says Johnson.

Adelson and Johnson met GelSight’s third cofounder in 2010 at an on-campus seminar on imaging and computer vision. János Rohály, a former MIT research scientist, had founded Brontes Technologies in 2004. That startup, which applied computer vision in dentistry, was acquired in 2006 by 3M. It was the incarnation of every MIT startup’s dream.

“After my talk, Ted and Kimo introduced themselves and told me about GelSight,” recalls Rohály, who is now CTO of GelSight. “I was captivated by their technology and invited them to make a presentation to my colleagues at Brontes. A little later I realized I was losing sleep fantasizing about their technology. In 2011, when they formed the company, they reached out to me. I had the entrepreneurial experience and the knowledge of the MIT network that could help them. And I joined the team.”

Tapping MIT’s broad network

With Rohály on board, the GelSight team turned to MIT’s teeming startup network for help plotting its next crucial steps. “MIT sits in the middle of the Boston-area startup ecosystem,” says Adelson. “This ecosystem is populated with technologists, investors, business people, lawyers, and other professionals. Together they form a vibrant group of people who are constantly networking, sharing ideas, and encouraging each other. That energy and activity is critical to launch a startup company. It was for us.”

The MIT ecosystem delivered in a big way for GelSight. The company’s founding trio received consistent support and encouragement from the MIT Venture Mentoring Service (VMS), which provided business advice, financial guidance, and introductions to potential manufacturing partners, customers, and investors. (VMS will be celebrating its 20th anniversary in 2020.)

“Neither Ted nor I had the slightest business experience,” says Johnson. “At the Venture Mentoring Service, we could rely on seasoned entrepreneurs who were ready to share their experience and expertise with us. There are so many challenges a fledgling company faces. Negotiating contracts, for example. It takes an experienced entrepreneur to know where to make concessions and where to push back. We got that and much more from the Venture Mentoring Service. In the early days, they almost served as a board of directors for us.”

GelSight got another big boost when their technology was featured in a 2011 MIT News article. “That article generated an enormous amount of interest,” says Johnson. “There are so many subscribers across so many industries. And MIT News gets copied on so many technical news sites. In fact, it was that article that connected us to a person in business development, who in turn connected us to our biggest consumer electronics customer.”

GelSight’s founders also made critical connections through the MIT Deshpande Center for Technological Innovation and the MIT Technology Licensing Office. The MIT Industrial Liaison Program put the young company in touch with a series of potential customers, including Boeing. “In our first years, we essentially bootstrapped the company, selling benchtop systems to customers in industries including cosmetics, abrasives, and aerospace,” says Johnson. “These were mostly connections we’d made through MIT. And they were enough to keep us going and slowly growing.”

Unlike many startups, which seek rapid growth and an early sale, GelSight has plotted a more gradual growth curve. In 2014, thanks to a connection obtained through the MIT network, the company received an inquiry from a China-based manufacturer of smartphones. That company had a slew of complex measurement problems they thought they might resolve with GelSight’s capacity to measure surface topography. That sale—GelSight’s first large-volume order—changed both the company’s manufacturing practices and its focus.

“Up until that point, we’d been selling single systems to R&D laboratories,” says Johnson. “This sale showed us that our real value would be in quality control. We shifted toward process development and systems for mass production and inspection.” Buoyed by the China sale, GelSight held its first round of financing in 2015. Capital infusions came from Omega Funds—a Boston-based venture capital firm that specializes in biotechnology and medical device companies—and Ping Fu, a technology innovator and investor Rohály knew from his days at Brontes. Both Fu and Omega Funds managing director Richard Lim sit on GelSight’s board of directors.

Rohály credits MIT for much of the success in GelSight’s first round of financing. “MIT gives you a tremendous boost when you approach people,” says Rohály. “Just the name alone. This is true not only in technology circles, but also in business circles. Especially with investors. If you are from MIT or have technology invented at MIT, people are interested in seeing that technology.”

He also credits MIT and its ecosystem for sustaining the GelSight enterprise through all phases of its development. “There is a can-do attitude among MIT people that I have rarely seen elsewhere,” he says. “They can attend to any problem at any level and have the confidence in their ability to solve it. Too many times, in other venues, I’ve seen people stumble before problems because they don’t trust their ability to solve them. That doesn’t exist at MIT. When there’s a problem, [MIT people] say great, let’s start working on it.”

“MIT gives you a tremendous boost… especially with investors. If you are from MIT or have technology invented at MIT, people are interested in seeing that technology,” Rohály says.

In the past few years, GelSight has hit several important milestones. In 2017, the company successfully deployed its technologies at mass production and inspection facilities. The following year, GelSight was selected to provide surface inspection technology for the manufacturing operations of a top aerospace company.

This too has helped GelSight gain credibility with investors. “Until recently, investors would ask us whether people would actually buy our products,” says Johnson. “Now, when we have major companies selecting our technology to inspect their flagship products, that’s validation.”

In 2019, the cofounders say GelSight plans to step off the brakes and hit the gas. Over the past few years, the company has spent significant time and resources resolving scientific questions about the technology to ensure it can be produced on a broader scale. Now GelSight is working to close its second round of financing. This new capital will enable the company to ramp up manufacturing and accelerate its business plan.

“We’ve been extremely attentive to managing cash flow and operations,” says Rohály. “And we’ve found a nice sweet spot in aerospace and electronics. We’re also continuing to push for customers in new spaces. The amazing thing is that 90 percent of our current customers come from inbound interest, from customers reaching out to us and asking us to solve their problems.”

The GelSight team still seeks advice from partners in MIT’s entrepreneurial ecosystem. But now the company’s leaders also offer insight and advice to other MIT inventors seeking to bring laboratory creations to market. “We’re very much a part of the broad MIT network,” says Johnson. “We’ve learned firsthand how much can be gained by experienced professionals sharing their knowledge within a larger community. Now we’re in a position to give back to the community that has helped us so much.”

Original article published in MIT’s Spectrum Magazine: https://spectrum.mit.edu/summer-2019/a-helping-hand/