BIRD MURI


The BIRD Multidisciplinary University Research Initiative (MURI) project aims to enable mini Unmanned Aerial Vehicles to autonomously navigate through densely cluttered environments such as forests.

Towards this end, CMU is working on reactive controllers and receding horizon control.

Reactive Controller using DAgger

 
We use imitation learning to iteratively train the drone to reproduce the expert’s control inputs: we extract a number of visual features from the image stream and perform a linear ridge regression from the feature vectors to the control inputs. The resulting controller learns to correlate specific changes in visual features with a particular control input (in our case, a roll to the left or right). For instance, consider optical flow: a tree close to the camera moves across the image faster than trees farther away, so as the expert sidesteps the tree, the controller learns to associate that specific change in optical flow with a command to evade left or right.
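To make the regression step concrete, here is a minimal sketch (not the project’s actual code) of closed-form ridge regression mapping per-frame visual feature vectors to the expert’s roll command. The feature extraction is assumed to have already produced the matrix X, and all names and data below are illustrative.

```python
import numpy as np

def train_ridge_controller(X, y, lam=1.0):
    """X: (n_frames, n_features) visual features; y: (n_frames,) expert roll commands."""
    n_features = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def predict_command(w, features):
    """Predicted roll command (sign convention, e.g. negative = left, is illustrative)."""
    return float(features @ w)

# Toy usage: random data standing in for optical-flow-style features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X @ rng.normal(size=50) + 0.1 * rng.normal(size=200)
w = train_ridge_controller(X, y, lam=10.0)
print(predict_command(w, X[0]))
```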

After the first few flights with the expert in control, we generate a preliminary controller and then fly the drone with only the controller commanding it. The operator provides expert input based on the image stream, and a new controller is trained on the aggregated data. This process continues until we obtain a satisfactory controller that has visited enough states to avoid trees consistently. For a more rigorous discussion, we recommend reading our paper.
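The iterative procedure above can be summarized as a DAgger-style training loop. The sketch below is schematic: the drone interface, expert labelling, and training routine are hypothetical stand-ins, not the project’s actual software.

```python
import numpy as np

def dagger(fly_and_record, expert_label, train, n_iterations=5):
    """
    fly_and_record(policy) -> feature vectors of states visited while 'policy' flies
                              (policy is None on the first pass, i.e. the expert flies).
    expert_label(state)    -> the expert's command for that state.
    train(X, y)            -> a new policy trained on all data gathered so far.
    """
    X_all, y_all = [], []
    policy = None
    for _ in range(n_iterations):
        states = fly_and_record(policy)                 # states visited under the current policy
        labels = [expert_label(s) for s in states]      # expert corrections on those states
        X_all.extend(states)
        y_all.extend(labels)
        policy = train(np.array(X_all), np.array(y_all))  # retrain on the aggregated dataset
    return policy

# Toy stand-ins so the loop runs end to end (purely illustrative).
rng = np.random.default_rng(0)
true_w = rng.normal(size=10)
fly_and_record = lambda policy: list(rng.normal(size=(50, 10)))
expert_label = lambda s: float(s @ true_w)
train = lambda X, y: np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y)
policy = dagger(fly_and_record, expert_label, train)
```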

Here’s a video of the system in action:

 

Receding Horizon Control

 
In addition to a purely reactive approach like DAgger, we are working on a more deliberative approach. The video below shows the ARDrone in the motion capture lab planning to a goal location using receding-horizon control. In receding-horizon control, a pre-computed set of feasible motion trajectories is evaluated on the local cost map built from sensor data, and the trajectory that is collision-free and takes the vehicle towards the goal location is selected and traversed. The entire process is repeated several times a second to incorporate new obstacle information as the ARDrone moves.
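As a rough illustration of the trajectory-selection step (not the project’s implementation), the sketch below scores a precomputed trajectory library against a 2D occupancy cost map and picks the collision-free trajectory whose endpoint is closest to the goal. The grid convention, resolution, and cost threshold are assumptions.

```python
import numpy as np

def select_trajectory(library, cost_map, resolution, goal_xy, cost_threshold=0.5):
    """
    library:    (n_traj, n_pts, 2) precomputed feasible (x, y) trajectories in the body frame.
    cost_map:   2D obstacle-cost grid centred on the vehicle.
    resolution: metres per grid cell.
    goal_xy:    goal position in the same frame as the trajectories.
    Returns the index of the collision-free trajectory whose endpoint is closest
    to the goal, or None if every trajectory is in collision.
    """
    h, w = cost_map.shape
    centre = np.array([h // 2, w // 2])
    best_idx, best_dist = None, np.inf
    for i, traj in enumerate(library):
        # Map (x, y) points into (row, col) grid cells (an assumed convention).
        cells = centre + np.round(traj[:, ::-1] / resolution).astype(int)
        cells = np.clip(cells, [0, 0], [h - 1, w - 1])
        if np.any(cost_map[cells[:, 0], cells[:, 1]] > cost_threshold):
            continue  # trajectory passes through an obstacle cell
        dist = np.linalg.norm(traj[-1] - goal_xy)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx

# Toy usage: straight-line trajectories fanned over headings on an empty map.
angles = np.linspace(-0.6, 0.6, 7)
library = np.stack([np.outer(np.linspace(0, 3, 20), [np.cos(a), np.sin(a)]) for a in angles])
cost_map = np.zeros((40, 40))
print(select_trajectory(library, cost_map, resolution=0.2, goal_xy=np.array([3.0, 1.0])))
```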

Team

 

Martial Hebert, Professor, Robotics Institute

J. Andrew Bagnell, Associate Professor, Robotics Institute

Team photo, standing L-R: Stéphane, Dey, KSS, Andy, Narek

Stéphane Ross, PhD Student, Robotics Institute

Debadeepta Dey, PhD Student, Robotics Institute

Andreas Wendel, PhD Student, Graz University of Technology

Narek Melik-Barkhudarov, MS Student, Robotics Institute

Kumar Shaurya Shankar, Research Staff, Robotics Institute

Miscellaneous

 
All our code is built on the open source ROS framework. We would like to take this opportunity to thank the community.
All our flights are conducted with a lightweight tether for safety purposes.

Sponsors

 
This work was funded by the Office of Naval Research through the “Provably-Stable Vision-Based Control of High-Speed Flight through Forests and Urban Environments” project.