The BIRD Multi University Research Initiative project envisions mini Unmanned Aerial Vehicles (UAVs) that autonomously navigate through densely cluttered environments such as forests.
Towards this end, CMU is working on reactive controllers and receding horizon control.
Direct Visual-Inertial Depth Estimation
Wind-Resistant Control for MAVs
Reactive Control using Imitation Learning
We use imitation learning to iteratively train the drone to reproduce the expert's control inputs: we extract a number of optical features from the image stream and then perform linear ridge regression from the feature vectors to the control inputs. The resulting controller learns to correlate specific changes in visual features with a particular control input (in our case, a roll left or right). For instance, considering optical flow, a tree close to the camera moves faster across the image than trees farther away; as the expert avoids the tree by moving sideways, the controller learns to associate that characteristic change in optical flow with a command to evade left or right.
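As a concrete illustration of the regression step, here is a minimal sketch with synthetic stand-in data (not our actual feature pipeline): per-frame visual feature vectors are mapped to the expert's roll command via closed-form ridge regression.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic stand-in data: each row is a per-frame feature vector
# (e.g. binned optical-flow magnitudes); targets are expert roll commands.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # 200 frames, 16 features
w_true = rng.normal(size=16)
y = X @ w_true + 0.01 * rng.normal(size=200)   # noisy expert commands

w = ridge_fit(X, y, lam=0.1)
pred_roll = X @ w                              # controller's roll per frame
```

The regularization term `lam` keeps the weights small, which matters when many visual features are correlated with one another.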
After the first few flights with the expert in control, we generate a preliminary controller and begin flying the drone with only the controller commanding it. The operator then provides expert input based on the image stream, and a new controller is generated from the aggregated data. This process continues until we obtain a satisfactory controller that has visited enough states to avoid trees on a consistent basis. For a more rigorous discussion, we recommend reading our paper.
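The iterative fly-label-retrain procedure above can be sketched schematically as follows; the `expert`, `env`, and `train` interfaces here are hypothetical stand-ins assumed for illustration, not our actual system API.

```python
def dagger(expert, env, train, n_iters=5, episodes_per_iter=3):
    """Schematic imitation-learning loop: fly, collect expert labels,
    retrain, repeat.

    Assumed (hypothetical) interfaces:
      expert(obs) -> command
      env.reset() -> obs
      env.step(cmd) -> (obs, done)
      train(dataset) -> policy
    """
    dataset = []   # aggregated (observation, expert command) pairs
    policy = None  # no controller yet: the expert flies the first iteration
    for _ in range(n_iters):
        for _ in range(episodes_per_iter):
            obs = env.reset()
            done = False
            while not done:
                # Execute the current controller's command, but always
                # record the expert's command for the state actually visited.
                cmd = expert(obs) if policy is None else policy(obs)
                dataset.append((obs, expert(obs)))
                obs, done = env.step(cmd)
        policy = train(dataset)  # e.g. the ridge regression fit
    return policy
```

Recording the expert's label on the states the *controller* visits (not just the states the expert visits) is what lets the learned policy recover from its own mistakes.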
Here’s a video of the system in action:
Receding Horizon Control for MAVs
In addition to a purely reactive approach like DAgger, we are working on a more deliberative approach. The video below shows the ARDrone in the motion capture lab planning to a goal location using receding-horizon control. In receding-horizon control, a pre-computed set of feasible motion trajectories is evaluated on the local cost map built from sensor data, and the trajectory that is collision-free and takes the vehicle closest to the goal is selected and traversed. The entire process is repeated several times a second to incorporate new obstacle information as the ARDrone moves.
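The selection step described above can be sketched as follows; the `cost_map` lookup is a hypothetical stand-in for the sensor-built local map, and the threshold value is an assumption for illustration.

```python
import numpy as np

def select_trajectory(library, cost_map, goal, collision_thresh=0.5):
    """Pick the collision-free trajectory whose endpoint is nearest the goal.

    library: list of (N, 2) arrays of local (x, y) waypoints (pre-computed).
    cost_map: function (x, y) -> obstacle cost from the sensor-built map
              (hypothetical interface).
    goal: (2,) array giving the goal position in the local frame.
    """
    best, best_dist = None, float("inf")
    for traj in library:
        # Reject any trajectory that passes through a high-cost (obstacle) cell.
        if any(cost_map(x, y) > collision_thresh for x, y in traj):
            continue
        dist = np.hypot(*(goal - traj[-1]))  # endpoint-to-goal distance
        if dist < best_dist:
            best, best_dist = traj, dist
    return best  # None if every candidate collides
```

In the real system this evaluation runs several times a second, so the planner always acts on the freshest obstacle information.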
Robust Monocular Flight in Cluttered Outdoor Environments
S. Daftry, S. Zeng, A. Khan, D. Dey, N. Melik-Barkhudarov, J. A. Bagnell and M. Hebert
arXiv, April 2016.
Vision and Learning for Deliberative Monocular Cluttered Flight
D. Dey, K. S. Shankar, S. Zeng, R. Mehta, M. T. Agcayazi, C. Eriksen, S. Daftry, M. Hebert and J. A. Bagnell
International Conference on Field and Service Robotics (FSR), Toronto, June 2015.
Semi-Dense Visual Odometry for Monocular Navigation in Cluttered Environment
S. Daftry, D. Dey, H. Sandhawalia, S. Zeng, J. A. Bagnell and M. Hebert
IEEE International Conference on Robotics and Automation (ICRA) Workshop on Recent Advances in Sensing and Actuation for Bioinspired Agile Flight, Seattle, May 2015.
Learning Monocular Reactive UAV Control in Cluttered Natural Environments
S. Ross, N. Melik-Barkhudarov, K. S. Shankar, A. Wendel, D. Dey, J. A. Bagnell and M. Hebert
IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, May 2013.
Towards Scalable Visual Navigation of Micro Aerial Vehicles
MS Thesis, April 2016.
Predicting Sets and Lists: Theory and Practice
PhD Thesis, August 2015.
Locally Deliberate Autonomous Monocular Navigation in Cluttered Natural Environments – A Systems Perspective
K. S. Shankar
MS Thesis, May 2014.
Interactive Learning for Sequential Decisions and Predictions
PhD Thesis, June 2013.
Kumar S. Shankar
Narek Melik-Barkhudarov
This work was funded by the Office of Naval Research (ONR) through the “Provably-Stable Vision-Based Control of High-Speed Flight through Forests and Urban Environments” project. Note: all our flights are conducted with a lightweight safety tether, in accordance with FAA regulations. All our code is built on the open-source ROS framework, and we thank the ROS community for making this work possible.