By Fast Company | 3 minute read

A team at the Massachusetts Institute of Technology has developed a robotic arm that can slide a vest onto a person's arm. And that's more impressive than it may initially sound.

A few years ago, I was eating in a retirement home cafeteria when a woman in her eighties called me over and asked me to help put on her cardigan. I said no problem, then grabbed a sleeve and tried to get it on her arm. That’s when I realized that her body had stiffened over the years, and her back was hunched. I didn’t know how to line up the geometry between her arm and the sleeve without injuring her.

“You’re not gonna break me!” she quipped, reading my indecision. And so I bent her limbs and shoulders harder than I would have thought safe. After a minute of nervous coaxing, her cardigan was on, and she returned to her lunch.

The job was far more difficult than I had anticipated, which makes the latest research out of MIT that much more meaningful. The MIT team has trained a robot to safely slide a vest onto a human arm, an early but important step toward creating a robot that could completely dress an aging or disabled person.

Robots have actually been able to dress themselves for a decade now. That achievement is possible only because a robot knows the dimensions of its own body and exactly what it intends to do next. Dressing someone else is an entirely different challenge, because it requires the robot to intuit another person's next move, lest an error twist a wrist or dislocate an elbow.

“In this work, we focus on a planning technique,” explains Shen Li, a PhD candidate in the Interactive Robotics Group at MIT and an author of the new paper presented at the Robotics: Science and Systems conference. “Robots predict human motion, then design a plan that’s safe based upon the prediction. If I dress a kid or adult, they might have different reactions. So you have to predict what they’ll do.”

This prediction, in the human brain, is an invisible process. We don’t fully understand how a person approaches a situation like sliding a shirtsleeve onto another human.

Li and his collaborators took a stock robot arm and fit it with a 3D tracker, which can see the movement of the person waiting to be dressed. Their breakthrough is in the software, which not only recognizes someone’s position in the moment, but considers how they might move next—in order to both successfully get them dressed, and not injure them in the process.

To stay safe against, say, 100 different possible movements, the system has to predict all 100 of them first, then create a path that keeps the person safe no matter which one they actually make.

“We’re not only predicting the most likely human movement, but the entire uncertain human set of the future,” Li says, noting that this is an especially conservative approach that can mean you are getting dressed at a snail’s pace.

However, over time, the software learns from the person getting dressed. It can slowly disregard movements a person never makes, editing down the possible list to something more probable and practical.
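The idea Li describes can be sketched in a few lines of toy code. This is not the MIT team's implementation; it is a minimal illustration, with made-up names and risk scores, of planning against a whole set of candidate human motions and then pruning candidates the person never exhibits, which lets the safe plan speed up:

```python
def safe_speed(candidate_motions, max_speed=1.0):
    """Plan conservatively: the robot's speed is capped by the riskiest
    candidate motion. Each candidate maps to a risk score in [0, 1];
    a higher worst-case risk forces a slower, more cautious plan."""
    worst_risk = max(candidate_motions.values())
    return max_speed * (1.0 - worst_risk)

def prune(candidate_motions, observed_motions):
    """Drop candidate motions the person has never actually made."""
    return {m: r for m, r in candidate_motions.items() if m in observed_motions}

# Start by assuming every movement the person *might* make (illustrative values).
candidates = {"hold_still": 0.1, "reach_forward": 0.4, "sudden_jerk": 0.9}

# Early on, the plan must be safe even for a sudden jerk, so it is slow.
early_speed = safe_speed(candidates)

# After watching the person, rule out movements they never make.
candidates = prune(candidates, {"hold_still", "reach_forward"})
later_speed = safe_speed(candidates)

print(round(early_speed, 2), round(later_speed, 2))
```

Once `sudden_jerk` is pruned from the set, the worst-case risk drops and the planner can move faster, mirroring how the robot starts "very conservative, very slow" and accelerates as it becomes more certain about the human.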

“In the beginning the robot might be very conservative, very slow,” Li says. “After the robot is more certain about the human [it’s faster].” Even as the robot speeds up, it’s never using a level of force that might injure someone, and the software is trained to respond to surprising movements at any moment, like if you picked up a TV remote and started flipping the channels while being dressed.

For the next steps of the research, Li would like to add a full sleeve to the vest and develop the software to accommodate the extra friction of pulling a garment onto an appendage. Once that step is figured out, pulling on a second sleeve, or a pair of pants, will be easier.

The other big shortcoming in this research is that the current robot starts with a human fist already pulled through a sleeve hole, so the team would like to solve that issue, too, dressing a human from the earliest steps in the process. Li notes that nurses will often take a person’s hand and stick it through a sleeve, hinting that ultimately, a second robot arm could make this task a lot easier.

These may sound like baby steps of development in a world where machine learning models seem to solve massive problems like computer vision and object recognition overnight. Li doesn’t balk when I suggest we might be a decade from training a robot to fully dress and undress someone, but he notes that it’s remarkably hard to work with humans rather than things.

“How do you develop an algorithm to learn [human behavior] efficiently?” Li asks. “You can’t just have a human there doing the task [a million times].”


Mark Wilson is a senior writer at Fast Company who has written about design, technology, and culture for almost 15 years. His work has appeared at Gizmodo, Kotaku, PopMech, PopSci, Esquire, American Photo and Lucky Peach.