MIT's blind Cheetah 3 robot can navigate without sensors or cameras

Engineers from the Massachusetts Institute of Technology have made a blind robot that moves by feeling, without the need for cameras or external vision sensors.

Massachusetts Institute of Technology's (MIT) Cheetah 3 moves in a way the engineers describe as "blind locomotion", meaning it "feels" its path through an environment — a process the team likens to a person making their way across a pitch-black room.

The team chose to limit the robot's capacity to perceive its surroundings via cameras and external sensors — the usual perception tools of an artificial intelligence — to enhance its navigation abilities.

"Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow," said Sangbae Kim, an associate professor of mechanical engineering at MIT who designed the robot.

"We want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast."

Cheetah 3 can navigate without prior information

Video of the Cheetah 3 — whose quadrupedal movement is similar to the internet-favourite Boston Dynamics robot — shows it climbing up a staircase with no camera or prior information about the terrain.

It's also shown walking outdoors on rough terrain, running on a treadmill, jumping, spinning while moving, and recovering its footing after being pushed or yanked around.

The Cheetah 3 navigates without external sensors or cameras

These abilities are made possible by the use of two new algorithms — one for "contact detection" and another for "model-predictive control".

Algorithm determines best time to take step

The contact-detection algorithm allows the robot to choose the best time for a given leg to step on the ground instead of swinging in the air. For instance, it makes the robot react differently to stepping on a light twig versus a solid rock.

Without the algorithm, the robot would not be able to recover its balance after encountering obstacles in its path.

"This algorithm is really about, 'when is a safe time to commit my footstep?'" Kim said. "If humans close our eyes and make a step, we have a mental model for where the ground might be, and can prepare for it. But we also rely on the feel of touch of the ground."

"We are sort of doing the same thing by combining multiple [sources of] information to determine the transition time."

While cameras and external sensors are out of the picture, the algorithm draws on data from the robot's internal gyroscopes, accelerometers and leg positions to calculate when to transition from swinging to stepping.
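The idea of fusing a gait-timing prior with proprioceptive evidence can be illustrated with a minimal sketch. This is not MIT's actual code; all function names, weights and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch of the contact-detection idea: combine a timing
# prior ("the foot should land about now in the gait cycle") with force
# evidence from the leg, and only commit the footstep when the fused
# probability is high. Values and weights are illustrative only.

def contact_probability(gait_phase: float,
                        leg_force_estimate: float,
                        expected_force: float = 20.0) -> float:
    """Fuse timing prior and force evidence into a probability in [0, 1]."""
    # Prior: how far through the swing phase the leg is
    # (0.0 = just lifted off, 1.0 = scheduled touchdown).
    timing_prior = min(max(gait_phase, 0.0), 1.0)
    # Evidence: how close the sensed leg force is to a firm-contact force.
    force_evidence = min(leg_force_estimate / expected_force, 1.0)
    # Simple equal-weight fusion of the two sources of information.
    return 0.5 * timing_prior + 0.5 * force_evidence

def should_commit_step(gait_phase: float, leg_force_estimate: float,
                       threshold: float = 0.7) -> bool:
    """Decide whether it is a safe time to commit the footstep."""
    return contact_probability(gait_phase, leg_force_estimate) >= threshold

# A light twig at touchdown time produces little force: keep swinging.
print(should_commit_step(gait_phase=0.9, leg_force_estimate=2.0))   # False
# A solid rock at the same moment produces firm contact: commit the step.
print(should_commit_step(gait_phase=0.9, leg_force_estimate=25.0))  # True
```

In this toy version, the twig and the rock produce different force evidence, so the same gait phase yields different decisions — the behaviour the article describes.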

Robot can regain balance after attack

The second algorithm, for model-predictive control, predicts how much force a leg should apply once it makes contact with the ground.

"The contact detection algorithm will tell you, 'this is the time to apply forces on the ground,'" Kim says. "But once you're on the ground, now you need to calculate what kind of forces to apply so you can move the body in the right way."
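The "predict, then pick a force" step can be sketched in one dimension. This is a deliberately simplified stand-in for model-predictive control, assuming a point-mass body model; the mass, timestep, horizon and candidate forces are invented for illustration.

```python
# Minimal one-dimensional sketch of the model-predictive-control idea:
# for each candidate ground-reaction force, predict the body's speed a
# few timesteps ahead, and apply the force whose prediction best tracks
# the commanded speed. All constants are illustrative, not MIT's values.

MASS = 30.0     # body mass in kg (assumed)
DT = 0.02       # control timestep in seconds (assumed)
HORIZON = 10    # number of timesteps to predict ahead

def predicted_error(force: float, velocity: float,
                    target_velocity: float) -> float:
    """Simulate a constant force over the horizon; return final speed error."""
    v = velocity
    for _ in range(HORIZON):
        v += (force / MASS) * DT   # point-mass model: a = F / m
    return abs(target_velocity - v)

def choose_ground_force(velocity: float, target_velocity: float) -> float:
    """Pick the candidate force whose predicted outcome best matches the target."""
    candidates = [f * 10.0 for f in range(-20, 21)]  # -200 N .. 200 N in 10 N steps
    return min(candidates,
               key=lambda f: predicted_error(f, velocity, target_velocity))

# After a shove slows the body from 1.5 m/s to 0.5 m/s, the controller
# selects a large forward force to recover the commanded speed.
print(choose_ground_force(velocity=0.5, target_velocity=1.5))  # 150.0
```

The same predict-and-select loop, run over candidate forces at every contact, is what lets the robot counter a push and return to its commanded motion.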

This algorithm is also what enables the robot to recover from an attack like being pushed — it can quickly calculate how much counter-force to produce to regain its balance and keep moving where it's been told to go.

The developments will be useful for robots that need to operate on uneven terrain or in hard-to-reach areas under remote control. Alongside new technologies like cyborg insects and edible drones, they could be applied in disaster zones.

The MIT team will present the Cheetah 3 at the International Conference on Intelligent Robots and Systems, held in Madrid from 1 to 5 October.

In future, they plan to add cameras to the robot, but they want to develop its blind locomotion abilities further first.

"When we do add vision, even if it might give you the wrong information, the leg should be able to handle [obstacles]," Kim said. "Because what if it steps on something that a camera can't see? What will it do?"

"That's where blind locomotion can help. We don't want to trust our vision too much."

Images by MIT.