<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: HW 2</title>
	<atom:link href="http://robotwhisperer.org/16831F09/?feed=rss2&#038;p=34" rel="self" type="application/rss+xml" />
	<link>http://robotwhisperer.org/16831F09/?p=34</link>
	<description>Fall 2009 Class Website</description>
	<lastBuildDate>Wed, 30 Sep 2009 21:50:51 -0400</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.4</generator>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
		<item>
		<title>By: Drew Bagnell</title>
		<link>http://robotwhisperer.org/16831F09/?p=34&#038;cpage=1#comment-8</link>
		<dc:creator>Drew Bagnell</dc:creator>
		<pubDate>Mon, 21 Sep 2009 17:52:52 +0000</pubDate>
		<guid isPermaLink="false">http://robotwhisperer.org/16831F09/?p=34#comment-8</guid>
		<description>Q: What motion model is reasonable for this data?

A: You can safely use a two-wheel &quot;trash-can&quot; robot motion model for this project, although many models have been applied successfully to the odometry data.</description>
		<content:encoded><![CDATA[<p>Q: What motion model is reasonable for this data?</p>
<p>A: You can safely use a two-wheel &#8220;trash-can&#8221; robot motion model for this project, although many models have been applied successfully to the odometry data.</p>
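<p>A minimal sketch of such a two-wheel odometry motion model in Python, assuming the usual rot1–trans–rot2 decomposition; the alpha noise parameters below are illustrative placeholders, not values from the assignment:</p>

```python
import math
import random

def motion_update(pose, d_rot1, d_trans, d_rot2, alphas=(0.01, 0.01, 0.05, 0.05)):
    """Sample a new (x, y, theta) pose from a simple two-wheel odometry model.

    d_rot1/d_trans/d_rot2 are relative odometry increments between consecutive
    log entries; alphas are assumed noise parameters (tune for your data).
    """
    a1, a2, a3, a4 = alphas
    # perturb each increment with zero-mean Gaussian noise scaled by the motion
    r1 = d_rot1 + random.gauss(0.0, a1 * abs(d_rot1) + a2 * d_trans)
    tr = d_trans + random.gauss(0.0, a3 * d_trans + a4 * (abs(d_rot1) + abs(d_rot2)))
    r2 = d_rot2 + random.gauss(0.0, a1 * abs(d_rot2) + a2 * d_trans)
    x, y, th = pose
    x += tr * math.cos(th + r1)
    y += tr * math.sin(th + r1)
    th = (th + r1 + r2 + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (x, y, th)
```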
]]></content:encoded>
	</item>
	<item>
		<title>By: Drew Bagnell</title>
		<link>http://robotwhisperer.org/16831F09/?p=34&#038;cpage=1#comment-7</link>
		<dc:creator>Drew Bagnell</dc:creator>
		<pubDate>Mon, 21 Sep 2009 17:43:07 +0000</pubDate>
		<guid isPermaLink="false">http://robotwhisperer.org/16831F09/?p=34#comment-7</guid>
		<description>FAQ:

Q: What is the relation between the frames in the map and data files? The “instruct.txt” file says the x, y, and theta are all in the “standard odometry frame,” and the map frame implicitly uses some other coordinates.

A: The resolutions for the coordinate systems are different for the map and the data files. In the data files, everything is in cm (so a range of 235 = 2.35 meters). For the map, units are decimeters, so each pixel is 10cm by 10cm. The relationships between the thetas are undefined—figuring that out is part of the localization problem. Assume some fixed orientation for the map and assume that the orientation of the robot is completely unknown at the start. The given odometry-based poses are relative to some local origin, so the only things you care about are dx, dy, and dTheta between iterations.
—-

Q: In wean.dat, which part of the matrix corresponds to (0,0) in the standard odometry frame? And in which directions do x and y point?

A: The location of (0,0) is unknown to you (hence the global localization problem). You don’t care where the robot’s local origin is—the only things that matter are the changes in x, y, and theta between iterations (the pose at time t+1 relative to the pose at time t). The top-left value in wean.dat is the top-left cell of the map. +x is to the right and +y is up, but keep in mind that you could rotate all odometry readings around any point and still localize correctly, since you’re only considering relative changes.
—-

Q: Reading instruct.txt, I notice that there are 2 separate entries for the coordinates of the robot and the coordinates of the laser. Could we have some more information on the robot? For example, is the laser fixed on the robot, and if not, how do we transform between the two frames?

A: The laser is fixed on the robot. You can consider this to be a problem of localizing the pose of the laser. The robot is just a fixed shape that can be trivially incorporated once the pose of the laser is known.
—-

Q: How do we derive p(z&#124;x)? Is the sensor data in ascii-robotdata1.log enough to derive this?

A: Deciding how to compute p(z&#124;x), the sensor model, is one of the main tasks of this project. :-) You are free to try whatever techniques you desire (the approaches discussed in class and in the book are a good start). There is plenty of information in the logs to develop a good sensor model, but you have to account for the uncertainty and error of both the sensor and the environment (such as feet of people walking around you).</description>
		<content:encoded><![CDATA[<p>FAQ:</p>
<p>Q: What is the relation between the frames in the map and data files? The “instruct.txt” file says the x, y, and theta are all in the “standard odometry frame,” and the map frame implicitly uses some other coordinates.</p>
<p>A: The resolutions for the coordinate systems are different for the map and the data files. In the data files, everything is in cm (so a range of 235 = 2.35 meters). For the map, units are decimeters, so each pixel is 10cm by 10cm. The relationships between the thetas are undefined—figuring that out is part of the localization problem. Assume some fixed orientation for the map and assume that the orientation of the robot is completely unknown at the start. The given odometry-based poses are relative to some local origin, so the only things you care about are dx, dy, and dTheta between iterations.<br />
—-</p>
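<p>For concreteness, a one-line conversion from data-file coordinates (cm) to map cell indices, using only the 10 cm cell resolution stated above; no origin offset or rotation is applied, since recovering that alignment is exactly the localization problem:</p>

```python
def world_to_map(x_cm, y_cm):
    # Each map cell is 10 cm x 10 cm, so integer-divide centimeter
    # coordinates by 10. The unknown offset/rotation between the odometry
    # origin and the map origin is deliberately not applied here.
    return int(x_cm // 10), int(y_cm // 10)
```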
<p>Q: In wean.dat, which part of the matrix corresponds to (0,0) in the standard odometry frame? And in which directions do x and y point?</p>
<p>A: The location of (0,0) is unknown to you (hence the global localization problem). You don’t care where the robot’s local origin is—the only things that matter are the changes in x, y, and theta between iterations (the pose at time t+1 relative to the pose at time t). The top-left value in wean.dat is the top-left cell of the map. +x is to the right and +y is up, but keep in mind that you could rotate all odometry readings around any point and still localize correctly, since you’re only considering relative changes.<br />
—-</p>
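<p>A sketch of computing those relative changes, expressing each displacement in the frame of the previous pose so that the arbitrary global origin and rotation of the odometry cancel out:</p>

```python
import math

def relative_motion(prev, curr):
    """Pose change between consecutive log entries, in the previous pose's frame."""
    x0, y0, th0 = prev
    x1, y1, th1 = curr
    gx, gy = x1 - x0, y1 - y0
    # rotate the global displacement by -th0 into the robot's local frame
    dx = math.cos(th0) * gx + math.sin(th0) * gy
    dy = -math.sin(th0) * gx + math.cos(th0) * gy
    dth = (th1 - th0 + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return dx, dy, dth
```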
<p>Q: Reading instruct.txt, I notice that there are 2 separate entries for the coordinates of the robot and the coordinates of the laser. Could we have some more information on the robot? For example, is the laser fixed on the robot, and if not, how do we transform between the two frames?</p>
<p>A: The laser is fixed on the robot. You can consider this to be a problem of localizing the pose of the laser. The robot is just a fixed shape that can be trivially incorporated once the pose of the laser is known.<br />
—-</p>
<p>Q: How do we derive p(z|x)? Is the sensor data in ascii-robotdata1.log enough to derive this?</p>
<p>A: Deciding how to compute p(z|x), the sensor model, is one of the main tasks of this project. <img src='http://robotwhisperer.org/16831F09/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' />  You are free to try whatever techniques you desire (the approaches discussed in class and in the book are a good start). There is plenty of information in the logs to develop a good sensor model, but you have to account for the uncertainty and error of both the sensor and the environment (such as feet of people walking around you).</p>
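<p>One common starting point (a sketch, not the required approach) is a per-beam mixture model: a Gaussian around the range predicted by ray casting into the map, plus a uniform term for unmodeled obstacles such as passing people. Every parameter below is an assumption to tune against the logs:</p>

```python
import math

def beam_likelihood(z, z_expected, sigma=15.0, z_max=8183.0, w_hit=0.8, w_rand=0.2):
    """p(z|x) for a single laser beam, given the expected range from the map.

    sigma, z_max (cm), and the mixture weights are illustrative values only.
    """
    # Gaussian "hit" component around the expected range (sensor noise)
    p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    # uniform "random" component for clutter and unexplained readings
    p_rand = (1.0 / z_max) if 0.0 <= z <= z_max else 0.0
    return w_hit * p_hit + w_rand * p_rand
```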
]]></content:encoded>
	</item>
</channel>
</rss>
