User:Aps43

Robot Lab, Spring 2012. I worked on OpenCV and object tracking. Below is a description of the work I've done, how to run my code, and some notes on future development. I suggest running the code alongside reading the descriptions, as this work is very visual in nature and seeing the demos will help in understanding.

Summary

I have worked on various methods of analyzing input from a single camera, such as a webcam. All of my implementations have been in C++ using OpenCV. The algorithms implemented are as follows:

  • Background subtraction
  • Optical flow
  • Color tracking
  • Blob tracking
  • Histogram of Oriented Gradients (untested)

I have made a demo of each of the algorithms. Data from each of these demos can be pulled and used for various robot tasks, like movement, mapping, etc. Each demo can be run with no arguments; it will pop up one or more windows with a camera image and some other data. Hit the escape key to exit.

All of the source code can be found here: Media:OpenCV.tar

To run it you will need to have OpenCV installed. An extra library is needed for the blob tracking. Installing is explained below.

Installing OpenCV

Go here: http://opencv.willowgarage.com/wiki/InstallGuide

They can explain it better than I can (and they keep it up to date). Just find your flavor of OS and OpenCV version on that page and follow the instructions.

Installing cvBlob

cvBlob is a library needed for the blob tracking code. Build and installation instructions for each platform can be found on the cvBlob project page.

Background Subtraction

In a nutshell, this algorithm detects movement by looking at recent changes in the image.

When started, the demo will show a completely black window. When the image changes (meaning there was movement), the parts of the image that changed will show up as white. Over time, it will "learn" a new background, so if the camera turns and the whole screen goes white (because everything moved), then it will adjust to the new background and turn the window black again.
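For reference, here is a minimal sketch of this kind of demo using OpenCV's built-in Gaussian-mixture background subtractor (OpenCV 2.x C++ API); the actual demo in the tarball may be structured differently:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);              // default webcam
        if (!cap.isOpened()) return 1;

        // Gaussian-mixture background model; it keeps learning, so a scene
        // change eventually fades back to black, as described above
        cv::BackgroundSubtractorMOG2 subtractor;
        cv::Mat frame, foreground;

        for (;;) {
            cap >> frame;
            if (frame.empty()) break;

            subtractor(frame, foreground);    // changed pixels come out white

            cv::imshow("foreground", foreground);
            if (cv::waitKey(30) == 27) break; // escape key exits
        }
        return 0;
    }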

Optical Flow

My implementation of this algorithm uses edge detection to find "points of interest" and then tracks these points frame to frame. It also keeps track of how far each point moved between each pair of frames, which is an indicator of perceived speed.

For each tracked point, the demo draws a line from where the point was in the previous frame to where it is in the current frame. The color of the line depends on how far the point moved: red for a short move, green for a medium one, and blue for a long one.

Assuming all of the tracked points are the same distance from the camera, the length of the line drawn (meaning how far the point traveled) is an indicator of speed: more movement means faster. However, if the points are at different distances from the camera, the line length is a function of both speed and distance, because a distant object moving at the same speed as a close one will travel fewer pixels per frame.

Using these properties, it is possible to do some image segmentation based on approximate distance. For example, assuming everything but the camera is stationary, if a set of ten points is moving quickly while the rest of the points are moving slowly, those ten points are likely part of the same object, and that object is close to the camera. The same principle can be applied to moving objects: if a set of points is moving in the opposite direction from all of the other points, then those points are part of an object moving in a different direction.
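Here is a minimal sketch of this style of demo, with one substitution: it finds points of interest with OpenCV's goodFeaturesToTrack corner detector rather than edge detection, and the red/green/blue distance thresholds are placeholder values:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;

        cv::Mat frame, gray, prevGray;
        std::vector<cv::Point2f> prevPts, pts;

        for (;;) {
            cap >> frame;
            if (frame.empty()) break;
            cv::cvtColor(frame, gray, CV_BGR2GRAY);

            if (!prevGray.empty() && !prevPts.empty()) {
                // track each point from the previous frame into this one
                std::vector<uchar> status;
                std::vector<float> err;
                cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, pts, status, err);

                std::vector<cv::Point2f> kept;
                for (size_t i = 0; i < pts.size(); ++i) {
                    if (!status[i]) continue;           // point was lost
                    cv::Point2f delta = pts[i] - prevPts[i];
                    double d = std::sqrt(delta.x * delta.x + delta.y * delta.y);
                    // short moves red, medium green, long blue (guessed thresholds)
                    cv::Scalar color = d < 2 ? cv::Scalar(0, 0, 255)
                                     : d < 8 ? cv::Scalar(0, 255, 0)
                                             : cv::Scalar(255, 0, 0);
                    cv::line(frame, prevPts[i], pts[i], color, 2);
                    kept.push_back(pts[i]);
                }
                prevPts = kept;
            }

            if (prevPts.size() < 50)  // refresh points of interest when few remain
                cv::goodFeaturesToTrack(gray, prevPts, 200, 0.01, 10);

            cv::imshow("optical flow", frame);
            gray.copyTo(prevGray);
            if (cv::waitKey(30) == 27) break;           // escape exits
        }
        return 0;
    }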

Color Tracking

A very simple algorithm: threshold the image for pixels that fall within a target color range, then track where that region of color is from frame to frame.
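As a sketch of the general technique (not necessarily my demo's exact approach): convert the image to HSV, threshold it with inRange, and find the centroid of the matching pixels from image moments. The color bounds below are placeholder values for a red-ish target:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;

        cv::Mat frame, hsv, mask;
        for (;;) {
            cap >> frame;
            if (frame.empty()) break;

            // HSV makes it easier to pick out a color independent of brightness;
            // these hue/saturation/value bounds would be tuned to the real target
            cv::cvtColor(frame, hsv, CV_BGR2HSV);
            cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);

            // centroid of the matching pixels, computed from image moments
            cv::Moments m = cv::moments(mask, true);
            if (m.m00 > 0) {
                cv::Point center(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
                cv::circle(frame, center, 8, cv::Scalar(0, 255, 0), 2);
            }

            cv::imshow("color tracking", frame);
            cv::imshow("mask", mask);
            if (cv::waitKey(30) == 27) break;   // escape exits
        }
        return 0;
    }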

Blob Tracking

Histogram of Oriented Gradients

Points of Robot Integration