It was a pleasure having you in class.

Note that, as stated in class, the final is *not* open book or open notes.

This should be available to you online, but if not:

16831 A STAT TECH IN ROBOTICS Mon. December 14, 5:30 p.m.-8:30 p.m., NSH 3002

Just a reminder that course assessments are really important. Also, please send me feedback personally; I always appreciate it. Thanks again for being an outstanding class.

*No groups*: individual assignments only. Due Dec. 12 by noon EST.

Take the existing HW4 data-set (or, alternatively, a labeled ladar data-set you prefer) that you have used for classification and explore two things with it:

1) Implement exponentiated gradient descent or an L1-regularized method (or both simultaneously) on a loss function you have already implemented (log loss, hinge loss, squared loss, etc.).

Are the exponentiated gradient algorithms good performers? Take the current feature set and:

- Add a large number of random features.

- Add a large number of features that are noisy, corrupted versions of the features already in the data-set.

How do the various methods perform in these situations? Compare and contrast with L2 methods.
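As a starting point for part 1, here is one rough sketch of an EG± update (positive and negative weight copies on the simplex) applied to the log loss. The function name, hyperparameters, and the choice of the EG± variant are my own illustration, not a prescribed part of the assignment:

```python
import numpy as np

def eg_pm_logistic(X, y, eta=0.1, U=10.0, iters=200):
    """EG+/- sketch: keep positive and negative weight copies on the
    probability simplex; the effective weights are w = U * (w_plus - w_minus).
    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}."""
    d = X.shape[1]
    w_plus = np.full(d, 0.5 / d)
    w_minus = np.full(d, 0.5 / d)
    for _ in range(iters):
        w = U * (w_plus - w_minus)
        margins = y * (X @ w)
        # gradient of the mean log loss: d/dw mean_i log(1 + exp(-y_i x_i . w))
        g = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        # multiplicative (exponentiated) update, then renormalize to the simplex
        w_plus = w_plus * np.exp(-eta * U * g)
        w_minus = w_minus * np.exp(eta * U * g)
        Z = w_plus.sum() + w_minus.sum()
        w_plus /= Z
        w_minus /= Z
    return U * (w_plus - w_minus)
```

The shared normalizer constrains the learned weights to an L1 ball of radius U, which is what gives EG its sparsity-friendly behavior under the irrelevant-feature experiments below.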
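For the feature experiments, one simple way to build the augmented data-sets might look like the following; all parameter names and defaults here are illustrative choices, not requirements:

```python
import numpy as np

def augment_features(X, n_random=100, n_noisy=100, noise_scale=1.0, seed=0):
    """Return X with extra columns appended:
    - n_random pure-noise features (standard normal draws), and
    - n_noisy corrupted copies of randomly chosen existing columns.
    Parameter names and defaults are illustrative, not from the assignment."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    random_feats = rng.normal(size=(n, n_random))
    cols = rng.integers(0, d, size=n_noisy)              # columns to corrupt
    noisy_feats = X[:, cols] + noise_scale * rng.normal(size=(n, n_noisy))
    return np.hstack([X, random_feats, noisy_feats])
```

Running the same learners on X and on the augmented matrix then lets you compare how gracefully EG/L1 and L2 methods degrade as irrelevant and redundant features are added.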

2) Implement a technique for “contextual classification”. Possible options:

a) Implement the graph cut method in http://www.ri.cmu.edu/publication_view.html?pub_id=6297

b) Implement the multiple k-means clustering of data-points/voting scheme pioneered in http://www.cs.uiuc.edu/homes/dhoiem/publications/Hoiem_Geometric.pdf and use features generated from that. (For a discussion of ladar points, see http://www.ri.cmu.edu/publication_view.html?pub_id=6297)

c) Define a continuous-valued random field using, e.g., the L1 total variation norm between data-points. Use optimization to find the optimal assignment for each ladar point. Feel free to restrict to two classes here.

d) Propose some other method to use multiple related/nearby labels to improve the structured prediction.
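For option (b), a crude stand-in for Hoiem-style multiple-segmentation voting is to cluster the points at several granularities and give each point the mean classifier score of its cluster as an extra contextual feature. Everything below (function names, the choice of cluster sizes) is an illustrative sketch, not the paper's actual pipeline:

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain Lloyd's algorithm; returns a cluster index per point."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members) > 0:
                centers[c] = members.mean(axis=0)
    return assign

def cluster_context_features(X, scores, ks=(3, 5, 9)):
    """For each clustering granularity k, give every point the mean
    per-point classifier score of its cluster as a contextual feature."""
    feats = []
    for k in ks:
        assign = kmeans(X, k, seed=k)
        means = np.zeros(k)
        for c in range(k):
            mask = assign == c
            means[c] = scores[mask].mean() if mask.any() else 0.0
        feats.append(means[assign])
    return np.column_stack(feats)
```

The contextual features can then be appended to the original features and the classifier retrained, so that each point's prediction is informed by the votes of its neighbors at several scales.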
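For option (c), a toy version of the continuous relaxation might look like the sketch below: per-point unary scores (e.g. classifier probabilities) plus a smoothed L1 TV penalty over neighbor edges, minimized by plain gradient descent. This is purely illustrative; a serious implementation would use a primal-dual TV solver, and all names and constants here are my own:

```python
import numpy as np

def tv_relax(f, edges, lam=1.0, eps=1e-2, lr=0.05, iters=1000):
    """Two-class continuous relaxation sketch. f: per-point unary scores in
    [0, 1]; edges: list of (i, j) neighbor pairs. Minimizes
        sum_i (u_i - f_i)^2 + lam * sum_(i,j) |u_i - u_j|
    by gradient descent on the smoothed absolute value sqrt(x^2 + eps),
    then thresholds u at 0.5 to recover hard labels."""
    u = f.astype(float).copy()
    ei = np.array([e[0] for e in edges])
    ej = np.array([e[1] for e in edges])
    for _ in range(iters):
        grad = 2.0 * (u - f)                             # unary (data) term
        diff = u[ei] - u[ej]
        g_edge = lam * diff / np.sqrt(diff ** 2 + eps)   # smoothed TV term
        np.add.at(grad, ei, g_edge)                      # scatter-add per edge
        np.add.at(grad, ej, -g_edge)
        u = np.clip(u - lr * grad, 0.0, 1.0)
    return (u > 0.5).astype(int)
```

On a small chain of points, the TV term pulls an isolated noisy unary score toward its neighbors, which is exactly the contextual smoothing effect you should look for on the ladar data.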

Can you get improvements on this data-set? Why or why not?
