From GICL Wiki
Revision as of 19:55, 12 November 2007 by Pwt23 (Talk | contribs)




Using only simple behaviors, we want to program our Creates to seek out food and bring it back to a home base. At the outset, your robot will search for food by performing random movements, leaving a pheromone in each location it visits. Your robot will work alone and must perform 3 iterations of finding food and then returning home.

[Image: robot find food... mmmm]


Program the Creates so that they can search for and find virtual food using only simple behaviors such as:

  • local movements
    • limited memory (i.e. the robot knows only its current location and none of the previous locations). The robot will not be allowed to perform path planning or use any memory other than the virtual environment map.
    • maintain the current location in memory
  • try to avoid walls using the virtual map (otherwise the internal odometry could be thrown off)
  • read pheromones
  • pick up food
  • leave pheromones


The robot will begin by searching for food using some sort of search algorithm. As the robot moves, it will leave a pheromone at each spot it visits. To model decay, each pheromone keeps track of the elapsed time since it was left.
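The decay model above can be sketched in a few lines: store the time each pheromone was dropped, and answer queries with the elapsed time since then. The real Environment class is provided by the course staff; the class and method names below are illustrative stand-ins, not the provided API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of pheromone decay (illustrative only): the grid remembers
// when a pheromone was dropped, and a query returns how old it is.
class PheromoneGrid {
    private final Map<Long, Long> dropTime = new HashMap<>(); // (x,y) -> drop timestamp

    private long key(int x, int y) { return ((long) x << 32) | (y & 0xffffffffL); }

    void leavePheromone(int x, int y, long now) {
        dropTime.put(key(x, y), now);
    }

    // Returns elapsed time since the pheromone was left, or -1 if none exists.
    long getPheromone(int x, int y, long now) {
        Long t = dropTime.get(key(x, y));
        return (t == null) ? -1 : now - t;
    }
}

public class DecayDemo {
    public static void main(String[] args) {
        PheromoneGrid grid = new PheromoneGrid();
        grid.leavePheromone(2, 3, 100);                   // dropped at time 100
        System.out.println(grid.getPheromone(2, 3, 110)); // prints 10 (time units old)
        System.out.println(grid.getPheromone(5, 5, 110)); // prints -1 (no pheromone)
    }
}
```

Note that a pheromone's "value" grows over time, so older trail segments report larger numbers.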

Once food is found, the robot will return to its starting place (the colony) by following the pheromones that are the oldest (why is that?).
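One way to make that step concrete: at each cell, look at the four neighbors and move to the one whose pheromone is oldest (largest elapsed-time value), since the cells nearest the colony were visited first. This is a sketch under that assumption; `getPheromone` here is a functional stand-in for the provided Environment method, not the real class.

```java
import java.util.function.BiFunction;

public class HomewardStep {
    static final int[][] MOVES = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

    // Returns the 4-neighbor {x, y} with the largest (oldest) pheromone value.
    // getPheromone should return -1 for cells with no pheromone.
    static int[] oldestNeighbor(int x, int y, BiFunction<Integer, Integer, Long> getPheromone) {
        int[] best = null;
        long bestAge = -1;
        for (int[] m : MOVES) {
            int nx = x + m[0], ny = y + m[1];
            long age = getPheromone.apply(nx, ny);
            if (age > bestAge) { bestAge = age; best = new int[]{nx, ny}; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy 1-D trail: pheromone age decreases toward the food at x=3,
        // so from x=2 the homeward step is back toward x=1.
        long[] trail = {30, 20, 10, 0};
        int[] next = oldestNeighbor(2, 0, (nx, ny) ->
                (ny == 0 && nx >= 0 && nx < trail.length) ? trail[nx] : -1L);
        System.out.println(next[0] + "," + next[1]); // prints 1,0
    }
}
```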

When the robot has navigated back to the colony, it must again follow a pheromone trail back to the food (can you figure out which pheromones?) and then return to the colony.


Virtual walls are what our ants will use as food. When a Create encounters a virtual wall, it will "pick up food" and return home by following its original pheromone trail, leaving pheromones along the way for other robots to find. The virtual walls will be positioned from above with the IR beam pointing downward to simulate the effect of a food source. There will be two virtual walls, symbolizing two food sources.

Virtual Pheromone

Once a robot encounters a virtual wall, it will begin its journey home, leaving a pheromone trail in the virtual environment. Leaving a virtual pheromone is accomplished by sending a location to the Environment Java class.

Other robots will determine whether there is a pheromone trail by querying the map with their current location. The value returned reflects how long ago that location was last visited: larger values mean older pheromones. For example, a pheromone on space X might return 999 if it was left 10 minutes ago, but only 10 if the location was visited just recently.


We will provide an Environment Java class that the robot will use to represent the hallway. Do not edit this class. Your robot may use the following methods from Environment:

  • Returns the pheromone value at this cell location.
long getPheromone(int x_, int y_)
  • Leaves a pheromone at this cell location.
void leavePheromone(int x_, int y_)
  • Returns whether the cell location contains the colony.
boolean isColony(int x_, int y_)
  • Returns whether the cell location is an obstacle (i.e. a wall).
boolean isObstacle(int x_, int y_)

The map can also print the current state of the world in XPM format. The pheromone trail is color coded: red marks the cells the robot has traveled most recently, and blue the least recently. The following method will allow you to draw the map:

  • void drawEnvironment(String outputFileName);
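Putting the methods above together, a single search step might look like the following. The `Environment` body here is a do-nothing stub with the documented signatures so the sketch compiles on its own; the real class (which actually models the hallway and writes the XPM file) is the one we provide.

```java
// Stub with the documented Environment signatures; return values here are
// placeholders, not the real class's behavior.
class Environment {
    long getPheromone(int x_, int y_) { return -1; }          // -1: no pheromone (assumed)
    void leavePheromone(int x_, int y_) { }
    boolean isColony(int x_, int y_) { return x_ == 0 && y_ == 0; } // colony at origin (assumed)
    boolean isObstacle(int x_, int y_) { return false; }
    void drawEnvironment(String outputFileName) { /* no-op in this stub */ }
}

public class SearchStep {
    public static void main(String[] args) {
        Environment env = new Environment();
        int x = 1, y = 2;
        if (!env.isObstacle(x, y)) {      // avoid walls before committing to a move
            env.leavePheromone(x, y);     // mark the cell we are leaving
        }
        System.out.println(env.isColony(0, 0)); // prints true for this stub
        env.drawEnvironment("world.xpm");       // snapshot of the trail so far
    }
}
```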


  • You MUST use the pheromones to navigate your robot to the food source and back to the colony. Map probing or other god-like techniques will result in a grade deduction. Remember, you are programming as if you are the ant. Be like ant, my friend.


  • We will provide Java skeleton code as a framework for building the specific behaviors.

Students are responsible for turning in:

  • A final environment map
  • Source code
  • A 2-page write-up of the assignment, covering:
    • Approach and implementation
    • What worked/didn't work
    • Possible other approaches
    • References


  • Dorigo, M., Di Caro, G., and Gambardella, L.M., "Ant Algorithms for Discrete Optimization," Artificial Life, vol. 5, no. 2, 1999, pp. 137-172.