Optimization and Learning for Rough-Terrain Legged Locomotion

December 12, 2009

Optimization and Learning for Rough-Terrain Legged Locomotion, Matt Zucker, Nathan Ratliff, Martin Stolle, Joel Chestnutt, J. Andrew Bagnell, Christopher G. Atkeson, James Kuffner.

We present a novel approach to legged locomotion over rough terrain that is thoroughly rooted in optimization. This approach relies on a hierarchy of fast, anytime algorithms to plan a set of footholds, along with the dynamic body motions required to execute them. Components within the planning framework coordinate to exchange plans, cost-to-go estimates, and "certificates" that ensure the output of an abstract high-level planner can be realized by lower layers of the hierarchy. The burden of careful engineering of cost functions to achieve desired performance is substantially mitigated by a simple inverse optimal control technique. Robustness is achieved by real-time re-planning of the full trajectory, augmented by reflexes and feedback control. We demonstrate the successful application of our approach in guiding the LittleDog quadruped robot over a variety of rough terrains. Other novel aspects of our past research efforts include a variety of pioneering inverse optimal control techniques as well as a system for planning using arbitrary pre-recorded robot behaviors.
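The inverse optimal control idea mentioned in the abstract can be sketched in a few lines. The toy below is not the paper's implementation; it is a minimal illustration of the general recipe behind margin-based inverse optimal control: assume the planner's cost is a linear combination of terrain features, plan with the current weights, and nudge the weights so that features used by the planned path become more expensive and features used by the expert demonstration become cheaper, until the planner reproduces the demonstration. The candidate paths, feature values, and learning rate here are all invented for illustration.

```python
import numpy as np

def plan(path_features, w):
    """Return the index of the cheapest candidate path under linear cost w . f."""
    costs = path_features @ w
    return int(np.argmin(costs))

def learn_cost_weights(path_features, demo_idx, eta=0.1, iters=50):
    """Subgradient-style inverse optimal control on a finite set of candidate paths.

    path_features: (n_paths, n_features) array of summed per-step features.
    demo_idx: index of the expert-demonstrated path.
    Update rule: w += eta * (f(planned) - f(demo)), keeping weights nonnegative,
    which raises the cost of what the planner chose and lowers the cost of
    what the expert chose.
    """
    w = np.array([1.0, 0.1])  # initial guess: mostly penalize distance
    for _ in range(iters):
        p = plan(path_features, w)
        if p == demo_idx:          # planner now agrees with the demonstration
            break
        w = np.maximum(0.0, w + eta * (path_features[p] - path_features[demo_idx]))
    return w

# Hypothetical features per path: [total distance, total terrain roughness].
# The expert (path 0) takes a longer but smooth route.
paths = np.array([
    [5.0, 0.0],   # demo: long, smooth
    [3.0, 4.0],   # short, very rough
    [4.0, 2.0],   # medium
])

w = learn_cost_weights(paths, demo_idx=0)
print(plan(paths, w))  # after learning, the planner picks the demonstrated path
```

With the initial weights the planner prefers the short rough path; after the update, roughness is weighted heavily enough that the smooth demonstrated route becomes optimal. The full method in the paper operates over footstep plans rather than a finite path set, but the weight-update structure is analogous.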
