Course Project

by Drew Bagnell on September 21, 2009

Fall 2009

The course project is an opportunity for you to make a substantial exploration into how techniques covered in class can be applied to a robotics problem of interest to you.  Since this project requires a substantial amount of work, we require you to work in groups of 3 people (any exceptions to this must be approved ASAP by the course staff).  The topic of your project is completely up to you as long as learning or probabilistic inference is a key component of your approach.  Your project idea needs to be approved before proceeding.


By September 29th – The members of your group and a 1-2 paragraph description of your problem and proposed approach are due by email to both Drew and Ranqi.  This is our chance to correct any issues before you get too far along.

Oct 1st – Initial project presentations.  A more detailed 1-2 page description of the problem to be addressed and the proposed approach is due.  Each group will also make a 5-10 minute presentation to the rest of the class.

November 5th – Progress report due, describing progress and results so far (no more than 4 pages).

December 3rd – Final project presentations to class (10 minutes for each group)

TBA – Final written reports due.

Project Proposals:

Be sure that your project proposals address the following questions:

1) What’s novel about your approach?

2) How do learning and probabilistic inference play a key role?

3) What will you have completed by November 4th, and what will you have completed by the end of the semester?

4) How will we measure success?

5) What’s the potential impact of success?

6) What are the key technical issues/concerns?

7) Be prepared to answer questions about related work…

Bring a computer to present with (and probably a back-up as well) and be ready to go as soon as your turn comes.

Possible Project Suggestions:

1) Discriminative mapping training:  Using a data-set (say, from the DepthX vehicle or the Intel Personal Robot) with known positioning (established by some alternate method) and a measured map, train a filter using Conditional Random Fields to recover that map. This is a different approach from the standard generative-model approach.
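As a rough starting point, the discriminative idea can be prototyped without the pairwise terms a full CRF would carry: fit a per-cell logistic model that maps sensor statistics to occupancy labels. Everything below (the feature choice, data shapes, and learning rate) is a hypothetical sketch, not the intended CRF solution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(features, labels, lr=0.1, epochs=200):
    """Fit per-cell occupancy probabilities discriminatively.

    features: (n_cells, n_features) per-cell sensor statistics,
              e.g. hit counts and pass-through counts (hypothetical).
    labels:   (n_cells,) 0/1 ground-truth occupancy from the known map.
    """
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(features @ w + b)
        # gradient of the average negative log-likelihood
        grad_w = features.T @ (p - labels) / len(labels)
        grad_b = np.mean(p - labels)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy example: occupied cells see more "hits" than "pass-throughs"
X = np.array([[9.0, 1.0], [8.0, 2.0], [1.0, 9.0], [0.0, 10.0]])
y = np.array([1, 1, 0, 0])
w, b = train_logistic(X, y)
pred = sigmoid(X @ w + b) > 0.5
```

A real CRF would add pairwise potentials between neighboring cells and train the whole field jointly; this per-cell model is only the independent baseline to beat.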

2) Develop a novel variant of occupancy mapping that uses Expectation Propagation to efficiently manage dependencies within the map.

3) Apply adaptive online learning algorithms to a data-set of commodity prices.  Compare various Online Convex Programming techniques: which lead to higher performance?
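A minimal sketch of the online-learning loop such a comparison would need, using Online Gradient Descent (one of the simplest Online Convex Programming methods) on a synthetic price series; the window size, learning rate, and data are all illustrative assumptions:

```python
import numpy as np

def online_gradient_descent(prices, window=3, lr=1e-4):
    """Predict the next price from the last `window` prices.

    Each round: predict, observe the true price, then take one
    gradient step on the squared loss (Online Gradient Descent).
    """
    w = np.zeros(window)
    losses = []
    for t in range(window, len(prices)):
        x = prices[t - window:t]
        err = w @ x - prices[t]          # signed prediction error
        losses.append(err ** 2)
        w -= lr * 2.0 * err * x          # gradient of (w.x - y)^2
    return w, losses

# synthetic "commodity" series: a noisy upward drift (placeholder data)
rng = np.random.default_rng(0)
prices = 10.0 + np.cumsum(rng.normal(0.1, 0.05, size=200))
w, losses = online_gradient_descent(prices)
```

For the actual project you would swap in real price data and compare this baseline against adaptive alternatives (e.g. per-coordinate step sizes), measuring cumulative loss or regret.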

4) Train a particle filter conditionally by labeling positions inside a map. Apply this to a wifi data-set collected in Pittsburgh, or to learning to localize an outdoor mobile robot.  Start from this paper (CRF-filters: Conditional Particle Filters for Sequential State Estimation. B. Limketkai, D. Fox, and L. Liao. ICRA-07), but train using online learning techniques to maximize conditional likelihood instead of using the perceptron algorithm.
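For reference, the vanilla particle-filter step that the CRF-filter idea modifies might look like the sketch below; in the discriminative variant, the `likelihood` argument would be a learned conditional model rather than a hand-built generative one. The function names and the toy tracking problem are assumptions:

```python
import numpy as np

def particle_filter_step(particles, weights, control, observation,
                         motion_model, likelihood, rng):
    """One predict / reweight / resample step of a plain particle filter.

    motion_model(particles, control) -> propagated particles
    likelihood(observation, particles) -> per-particle p(z | x)
    """
    particles = motion_model(particles, control)
    weights = weights * likelihood(observation, particles)
    weights = weights / weights.sum()
    # systematic (low-variance) resampling
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# toy 1-D tracking problem: the true state drifts +1 per step
rng = np.random.default_rng(1)
particles = rng.uniform(0.0, 100.0, 500)
weights = np.full(500, 1.0 / 500)
motion = lambda p, u: p + u + rng.normal(0.0, 0.5, p.shape)
lik = lambda z, p: np.exp(-0.5 * (z - p) ** 2)  # unit-variance Gaussian obs
state = 50.0
for _ in range(20):
    state += 1.0
    z = state + rng.normal(0.0, 1.0)
    particles, weights = particle_filter_step(
        particles, weights, 1.0, z, motion, lik, rng)
```

The project would replace `lik` with a model whose parameters are trained online to maximize the conditional likelihood of labeled positions.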

Localization Data-Set Q&A:

Q: What is the relation between the frames in the map and data files? The “instruct.txt” file says the x, y, and theta are all in the “standard odometry frame,” and the map frame uses some other coordinates.
A: The resolutions for the coordinate systems are different for the map and the data files. In the data files, everything is in cm (so a range of 235 = 2.35 meters). For the map, units are decimeters, so each pixel is 10cm by 10cm. The relationships between the thetas are undefined—figuring that out is part of the localization problem. Assume some fixed orientation for the map and assume that the orientation for the robot is completely unknown at the start. The given odometry-based poses are relative to some local origin, so the only things you care about are the dx, dy, dTheta between iterations.
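The advice above (use only relative changes between iterations) can be made concrete: compute each odometry increment in the frame of the previous pose, which makes it invariant to the unknown odometry origin. A small sketch, with the pose convention (x, y, theta) in cm and radians assumed:

```python
import math

def relative_motion(pose_prev, pose_curr):
    """Odometry increment from time t to t+1, expressed in the robot
    frame at time t, so it does not depend on the arbitrary odometry
    origin. Poses are (x, y, theta) tuples (convention assumed).
    """
    x0, y0, th0 = pose_prev
    x1, y1, th1 = pose_curr
    dx_g, dy_g = x1 - x0, y1 - y0
    # rotate the global displacement into the frame of pose_prev
    dx = math.cos(th0) * dx_g + math.sin(th0) * dy_g
    dy = -math.sin(th0) * dx_g + math.cos(th0) * dy_g
    # wrap the heading change into [-pi, pi]
    dth = math.atan2(math.sin(th1 - th0), math.cos(th1 - th0))
    return dx, dy, dth
```

For example, a robot facing +y that moves 10 cm forward yields (dx, dy, dth) = (10, 0, 0): forward motion in its own frame, regardless of where the odometry origin sits.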
Q: In wean.dat, which part of the matrix corresponds to (0,0) in the standard odometry frame? and in which direction do x and y point?
A: The location of (0,0) is unknown to you (hence the global localization problem). You don’t care where the robot’s local origin is—the only thing important to you are changes in x, y, theta between iterations (pose at time t+1 relative to pose at time t). The top left value in wean.dat is the top left cell of the map. +x is to the right and +y is up, but keep in mind that you could rotate all odometry readings around any point and still localize correctly since you’re only considering relative changes.
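Under the conventions just described (top-left value at the top-left of the map, +x right, +y up, 10 cm cells), converting a map-frame point in cm to a cell index might look like the following sketch; the function name and arguments are hypothetical:

```python
def world_to_cell(x_cm, y_cm, n_rows, resolution_cm=10):
    """Convert a map-frame point in cm to a (row, col) cell index.

    The top-left value in wean.dat is the top-left cell of the map
    and +y points up, so row 0 holds the largest y values.
    Each cell is 10 cm x 10 cm (one decimeter-resolution pixel).
    """
    col = int(x_cm // resolution_cm)
    row = n_rows - 1 - int(y_cm // resolution_cm)
    return row, col
```

So the map-frame origin (0, 0) lands in the bottom-left cell, and increasing y moves the index toward row 0 at the top of the file.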
Q: Reading instruct.txt, I notice that there are 2 separate entries for the coordinates of the robot and the coordinates of the laser. Could we have some more information on the robot? for example, whether the laser is fixed on the robot or not, and if not, how to transform between them.
A: The laser is fixed on the robot. You can consider this to be a problem of localizing the pose of the laser. The robot is just a fixed shape that can be trivially incorporated once the pose of the laser is known.
Q: How do we derive p(z|x)? Is the sensor data in ascii-robotdata1.log enough to derive this?
A: Deciding how to compute p(z|x), the sensor model, is one of the main tasks of this project. :-) You are free to try whatever techniques you desire (the approaches discussed in class and in the book are a good start). There is plenty of information in the logs to develop a good sensor model, but you have to account for the uncertainty and error of both the sensor and the environment (such as feet of people walking around you).
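One common starting point, sketched below, is the mixture beam model discussed in class and in the book: a Gaussian around the expected (ray-cast) range, a uniform clutter term for unmodeled obstacles such as people's feet, and a spike at max range. The mixture weights, noise sigma, and max range here are all illustrative placeholders to be tuned from the logs:

```python
import math

def beam_likelihood(z, z_expected, z_max=8000.0,
                    w_hit=0.8, w_rand=0.15, w_max=0.05, sigma=50.0):
    """p(z|x) for a single laser beam, as a three-part mixture.

    z          : measured range (cm)
    z_expected : range obtained by ray-casting from pose x into the map
    All weights, sigma, and z_max are illustrative, not tuned values.
    """
    # Gaussian "hit" around the expected range
    p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) \
        / (sigma * math.sqrt(2.0 * math.pi))
    # uniform clutter term for unmodeled obstacles
    p_rand = 1.0 / z_max
    # spike at max range for beams that hit nothing
    p_max = 1.0 if z >= z_max else 0.0
    return w_hit * p_hit + w_rand * p_rand + w_max * p_max
```

In practice you would multiply (or sum log-likelihoods) over a subsampled set of beams per scan, and fit the mixture parameters to held-out log data.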


Drew Bagnell September 21, 2009 at 7:26 pm

MIT posted a bunch of data logs and visualization software from the Urban Challenge that some of you might be interested in exploring for possible project ideas.

Thanks to Dan for the info on this. If anyone else knows of other similarly interesting data sets please share.

jsinglet September 24, 2009 at 8:23 pm

Is anyone looking for someone to work with for this project? I don’t know any of you and I am looking for a group. Please let me know if you or your group is looking for another person. My email is

