Learning Locomotion

Our team applies machine learning techniques to difficult problems in robotics, particularly at the interface between machine learning and planning. The DARPA-funded Learning Locomotion project, led by Chris Atkeson, Drew Bagnell, and James Kuffner, is designed to push the state of the art in legged locomotion forward. The LittleDog robot must traverse complex, unrehearsed terrain; our performance metrics are speed and reliability.

Key technologies developed under this program include fast, machine-learning-style approaches to planning and trajectory generation, as well as imitation learning techniques that make both hierarchical and flat planners effective and very fast. More information on footstep prediction is available here.
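To make the idea of learned foothold scoring concrete, here is a minimal sketch of a planner that trades a learned linear terrain cost against progress toward a goal. This is purely illustrative: the features (local slope, roughness), the hand-set weights, and the greedy search are all invented for this example and are not the project's actual planner or learned model.

```python
def terrain_features(heightmap, cell):
    """Toy per-cell features: local slope, roughness, and a bias term.
    These features are illustrative stand-ins, not the project's real ones."""
    r, c = cell
    patch = [heightmap[i][j]
             for i in range(max(r - 1, 0), min(r + 2, len(heightmap)))
             for j in range(max(c - 1, 0), min(c + 2, len(heightmap[0])))]
    slope = max(patch) - min(patch)
    mean = sum(patch) / len(patch)
    rough = sum(abs(h - mean) for h in patch) / len(patch)
    return [slope, rough, 1.0]

def foothold_cost(weights, heightmap, cell):
    """Linear cost over terrain features; in imitation learning the weights
    would be learned from expert footstep demonstrations."""
    return sum(w * f for w, f in zip(weights, terrain_features(heightmap, cell)))

def greedy_footsteps(weights, heightmap, start, goal, max_steps=30):
    """Greedily pick the next foothold, trading learned terrain cost
    against distance to the goal. A toy stand-in for a full footstep planner."""
    rows, cols = len(heightmap), len(heightmap[0])
    path, visited, cur = [start], {start}, start
    for _ in range(max_steps):
        if cur == goal:
            break
        best, best_score = None, float("inf")
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (cur[0] + dr, cur[1] + dc)
                if nxt in visited or not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                dist = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                score = foothold_cost(weights, heightmap, nxt) + dist
                if score < best_score:
                    best_score, best = score, nxt
        if best is None:
            break
        cur = best
        visited.add(cur)
        path.append(cur)
    return path

# Flat terrain with one rough bump at (2, 3) that the planner should skirt.
hm = [[0.0] * 7 for _ in range(5)]
hm[2][3] = 1.0
w = [5.0, 5.0, 0.1]  # heavy (hand-set) penalty on slope and roughness
path = greedy_footsteps(w, hm, (2, 0), (2, 6))
```

With the roughness penalty dominating, the planner detours around the bump rather than stepping near it; with weights learned from demonstrations instead of hand-set, this is the flavor of cost function that maximum-margin imitation learning recovers.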

In the News:

DARPA Pushes Machine Learning with Legged LittleDog Robot

With phase two testing wrapped up, six teams of roboticists are focused on improving LittleDog’s speed and agility

By Larry Greenemeier

If BigDog is the Defense Advanced Research Projects Agency’s (DARPA) dopey but lovable Great Dane, LittleDog is its extremely intelligent—if high-strung—Jack Russell terrier.

Shortly after DARPA commissioned Boston Dynamics to build its BigDog autonomous legged robot, the agency decided it should broaden its research to include a likewise legged device that was aware of its environment and deliberately placed its feet to avoid falling. LittleDog’s software spells out the robot’s route and its cameras and sensors help it “see” obstacles so it can avoid missteps.

While BigDog’s quick thinking and nimbleness have their limits (particularly if it happens to step off a high ledge or cliff), LittleDog’s specialty is being able to sense its surroundings and avoid such dangers altogether. It methodically moves over obstacles much larger than its leg length and body size—it measures 11.8 by 7.1 inches (30 by 18 centimeters), stands 5.5 inches (14 centimeters) tall and weighs 4.9 pounds (2.2 kilograms). “We wanted LittleDog to deal with the locomotion problem,” says Larry Jackel, a DARPA program manager responsible for robotic vehicles who spent four years at the agency until June 2007 and now works as an independent consultant.


Related LairLab Publications and Preprints

Optimization and Learning for Rough-Terrain Legged Locomotion

Matt Zucker, Nathan Ratliff, Martin Stolle, Joel Chestnutt, J. Andrew Bagnell, Christopher G. Atkeson, and James Kuffner

CHOMP: Gradient Optimization Techniques for Efficient Motion Planning

Nathan Ratliff, Matt Zucker, J. Andrew Bagnell, and Siddhartha Srinivasa
Proc. IEEE Int’l Conf. on Robotics and Automation, May, 2009.

Learning to search: Functional gradient techniques for imitation learning

Nathan Ratliff, David Silver, and J. Andrew (Drew) Bagnell
Autonomous Robots, Vol. 27, No. 1, July, 2009, pp. 25-53.

Imitation Learning for Locomotion and Manipulation

Nathan Ratliff, J. Andrew (Drew) Bagnell, and Siddhartha Srinivasa
IEEE-RAS International Conference on Humanoid Robots, November, 2007.

Boosting Structured Prediction for Imitation Learning

Nathan Ratliff, David Bradley, J. Andrew (Drew) Bagnell, and Joel Chestnutt
Advances in Neural Information Processing Systems 19, 2007.