Spring 2011 Robot Lab Assignment 3

From GICL Wiki
Revision as of 18:36, 11 May 2011 by Dsi23 (Talk | contribs)



Assignment 3: "Where am I"

In this assignment, you will use the Roomba model robot to perform simultaneous localization and mapping (SLAM). Your robot will be placed at a random location in the world and must construct a map of that world. The robot does not know its location or orientation. The robot should build the map as it explores; that is, it should NOT wait until it has explored the entire world to produce a map. If your robot is stopped at an arbitrary point in time, it should already have written some kind of map representing what it has seen so far. You probably want to look into "occupancy grid mapping" for this assignment.
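An occupancy grid divides the world into cells and marks each cell unknown, free, or occupied as laser readings come in. The following is a minimal sketch of the idea in Java; the class, grid size, and cell encoding are illustrative assumptions, not part of the provided skeleton:

```java
public class OccupancyGrid {
    // Cell states: 0 = unknown, 1 = free, 2 = occupied.
    private final byte[][] cells;
    private final double resolution;    // meters per cell
    private final int originX, originY; // grid indices of world (0, 0)

    public OccupancyGrid(int width, int height, double resolution) {
        this.cells = new byte[height][width];
        this.resolution = resolution;
        this.originX = width / 2;
        this.originY = height / 2;
    }

    // Mark the cells along one laser ray: free up to the hit, occupied at the hit.
    // (rx, ry) is the sensor's world position; angle is the beam's world angle.
    public void addRay(double rx, double ry, double angle, double range, double maxRange) {
        double step = resolution / 2.0; // sample finer than the cell size
        for (double d = 0; d < range && d < maxRange; d += step) {
            set(rx + d * Math.cos(angle), ry + d * Math.sin(angle), (byte) 1);
        }
        if (range < maxRange) { // a real obstacle, not a max-range miss
            set(rx + range * Math.cos(angle), ry + range * Math.sin(angle), (byte) 2);
        }
    }

    private void set(double wx, double wy, byte state) {
        int cx = originX + (int) Math.round(wx / resolution);
        int cy = originY + (int) Math.round(wy / resolution);
        if (cy >= 0 && cy < cells.length && cx >= 0 && cx < cells[0].length
                && cells[cy][cx] != 2) { // never downgrade an occupied cell
            cells[cy][cx] = state;
        }
    }

    public byte get(int cx, int cy) { return cells[cy][cx]; }
}
```

Because the grid is updated one ray at a time, it always reflects everything seen so far and can be dumped to an image file at any moment, which is exactly what the "stopped at an arbitrary point" requirement asks for.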

Part 1

In the first part of the assignment, you will generate a map using dead reckoning. You may not use getX(), getY(), getYaw(), getData(), or getGeom() in the Position2DInterface class. If you find other functions that report the robot's location or orientation, you may not use them either. You may use the laser range finder.
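With the odometry getters off-limits, one common dead-reckoning approach is to integrate the velocity commands you send over time. A sketch of that idea, assuming your control loop runs at a known rate (the class and method names here are hypothetical, not from the provided skeleton):

```java
// Dead-reckoning pose tracker: integrates the commanded forward speed and
// turn rate instead of reading odometry.
public class DeadReckoner {
    private double x, y, yaw; // estimated pose, starting at the origin

    // Call once per control cycle with the speeds you just commanded
    // and the elapsed time dt (seconds).
    public void update(double forwardSpeed, double turnRate, double dt) {
        x += forwardSpeed * Math.cos(yaw) * dt;
        y += forwardSpeed * Math.sin(yaw) * dt;
        yaw += turnRate * dt;
        // Keep yaw in [-pi, pi) so angle arithmetic stays simple.
        yaw = Math.atan2(Math.sin(yaw), Math.cos(yaw));
    }

    public double getXEst()   { return x; }
    public double getYEst()   { return y; }
    public double getYawEst() { return yaw; }
}
```

Note that this estimate drifts: the robot does not move exactly as commanded, and the error accumulates, which is part of what Part 2 lets you compare against.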

Part 2

In the second part, you will generate a map using an odometry-based technique. You should use the getX(), getY(), getYaw(), getData(), and/or getGeom() functions in the Position2DInterface class. You may use the laser range finder.
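Whichever pose source you use, each laser return must be projected into world coordinates before it is entered into the grid. A small self-contained sketch of that transform (the pose would come from, e.g., getX()/getY()/getYaw(); the helper name is an assumption):

```java
public class LaserProjection {
    // Project one laser return into world coordinates, given the robot's
    // pose (px, py, pyaw). bearing is the beam's angle relative to the
    // robot's heading; range is the measured distance in meters.
    public static double[] toWorld(double px, double py, double pyaw,
                                   double range, double bearing) {
        double wx = px + range * Math.cos(pyaw + bearing);
        double wy = py + range * Math.sin(pyaw + bearing);
        return new double[] { wx, wy };
    }
}
```

The same transform works for Part 1; only the source of (px, py, pyaw) changes, from the dead-reckoned estimate to the odometry readings.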

Extra Credit

In the optional extra credit, you will produce a map that shows the 'diff' between maps produced in Part 1 and Part 2, then combine measurements from these techniques to make a more accurate map (and visualize the improvement).

Configuration Files

The configuration files are provided here.

Inside there are client skeleton files, which you are welcome to use or not. There is also a script, create_world.sh, that generates the slam.world file. Please note that it places the robot in a different starting location each time. This script will be run before your robot is tested, so you cannot assume that it will start in any particular location. It is possible that the script places your robot inside an obstacle; if that happens, just run it again.

Additional Requirements

  • You may not hard-code turns or rely on anything particular to the provided map. If your robot were run on a map with the landmarks in different locations, it should still work.
  • The maps produced by your robot should be in some reasonably common image format such as XPM, PNG, JPEG, etc.
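Java's standard library can write PNG directly via javax.imageio, so no external imaging library is needed. A sketch that renders a grid of cell states (using the same 0/1/2 encoding assumed above) to a grayscale PNG:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class MapWriter {
    // Render a grid of cell states (0 = unknown, 1 = free, 2 = occupied)
    // to a PNG: unknown = gray, free = white, occupied = black.
    public static void writePng(byte[][] cells, File out) throws Exception {
        int h = cells.length, w = cells[0].length;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = cells[y][x] == 1 ? 0xFFFFFF
                        : cells[y][x] == 2 ? 0x000000
                        : 0x808080;
                // Flip vertically so +y in the world points up in the image.
                img.setRGB(x, h - 1 - y, rgb);
            }
        }
        ImageIO.write(img, "png", out);
    }
}
```

Calling this periodically (say, every few seconds or every N laser scans) satisfies the requirement that a map file exists whenever the robot is stopped.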

If you are not sure about the above requirements, feel free to email the TAs.

Extra Credit

Combine the estimates from both the dead-reckoning and the odometer-based approaches in some reasonable way to create a map. In order to get points for this part, your README file must describe the data fusion technique you used.
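One simple fusion rule (an illustrative choice, not the required technique) is to combine the two grids cell by cell: an "occupied" vote from either map wins, then "free", then "unknown". A sketch, again using the 0/1/2 cell encoding assumed earlier:

```java
public class MapFusion {
    // Combine two grids of cell states (0 = unknown, 1 = free, 2 = occupied),
    // cell by cell: occupied wins, then free, then unknown.
    public static byte[][] fuse(byte[][] a, byte[][] b) {
        byte[][] out = new byte[a.length][a[0].length];
        for (int y = 0; y < a.length; y++) {
            for (int x = 0; x < a[0].length; x++) {
                if (a[y][x] == 2 || b[y][x] == 2)      out[y][x] = 2;
                else if (a[y][x] == 1 || b[y][x] == 1) out[y][x] = 1;
                else                                   out[y][x] = 0;
            }
        }
        return out;
    }
}
```

Whatever rule you choose, remember that the README must describe it to earn the extra-credit points.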

Submission

  • Submit a tarball to the TAs: Dustin Ingram (dsi23@drexel.edu) and Aaron Rosenfeld (ar374@drexel.edu) containing:
    • Your robot client code for parts 1&2
    • A README file with any necessary build instructions or peculiarities of your program
    • Images of your maps for parts 1&2
  • If something does not work properly or is incomplete, you must say so in the README.

Grading

  • Quality of dead-reckoning-based map: up to 12 pts.
    • Periodically writing a map file: 2 pts.
    • Capturing all of the shapes on the map: 5 pts.
    • Accuracy of the shapes and their relative positions: 5 pts.
  • Quality of odometry-based map: up to 12 pts.
    • Periodically writing a map file: 2 pts.
    • Capturing all of the shapes on the map: 5 pts.
    • Accuracy of the shapes and their relative positions: 5 pts.
  • Code is readable and documented: 4 pts.
  • Quality of techniques for fusing (combining) the data from both approaches, and quality of the map it produces: up to 6 extra points.

TOTAL = 28 pts. + 6 pts. extra credit

Map

Slam-map.png