An Extended Kalman Filter 2010

Latest revision as of 20:29, 23 March 2010



In the pursuit of building truly autonomous mobile robots, simultaneous localization and mapping is considered one of the most important problems. Simultaneous localization and mapping (SLAM) is the process by which a mobile robot builds a map of an unknown environment and at the same time uses this map to compute its own location. SLAM is a chicken-and-egg problem: to identify the robot's location, a map is needed, but that map is still being built by the robot. One classical solution to the SLAM problem is the EKF-SLAM method. EKF-SLAM uses the Kalman filter, an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. The goal of this project is to implement and simulate the EKF-SLAM method for solving the SLAM problem.

How the project works:

An overview of the project is shown here:


The detailed process is explained below:

1. Observe and associate landmarks:

For simplicity, I assumed true data association: whenever the robot sees a landmark, it knows which landmark it is. Each landmark has an id, and landmarks are stored in an array. If a landmark with id n is not yet in the array, it is added.
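The bookkeeping in this step can be sketched as follows (an illustrative helper, not part of the project code; a map keyed by landmark id stands in for the landmark array):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the data-association step under the true-association
// assumption: every observation carries a known landmark id, and ids
// not seen before are appended to the collection.
public class LandmarkRegistry {
    private final Map<Integer, double[]> landmarks = new LinkedHashMap<>();

    // Returns true if the landmark was seen before (it will be used in
    // the update step), false if it was just added as a new landmark.
    public boolean observe(int id, double x, double y) {
        if (landmarks.containsKey(id)) {
            return true;
        }
        landmarks.put(id, new double[] { x, y });
        return false;
    }

    public int size() {
        return landmarks.size();
    }
}
```

A LinkedHashMap keeps landmarks in insertion order, mirroring an append-only array while making the id lookup constant-time.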

2. Predict the next state using odometry:

The robot's state is defined as X = [x  y  theta]

Here, x = x component of the robot's position

y = y component of the robot's position

theta = robot's orientation

The robot's initial uncertainty is defined as P, a 3×3 diagonal matrix. The next state is predicted using the following equation:

            Next state estimation, X_k+1 = F_k X_k + B_k u_k
            F_k = the Jacobian of the prediction model
                          = [1   0   -DeltaY]
                            [0   1    DeltaX]
                            [0   0    1     ]
            DeltaY = change in y = velocity * deltaT * sin(theta)
            DeltaX = change in x = velocity * deltaT * cos(theta)
            B_k = input gain matrix
                          = [cos(theta)   0]
                            [sin(theta)   0]
                            [0            1]
            u_k = control input vector
                          = [deltaT * velocity, deltaTheta]^T
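The state prediction above can be sketched in plain Java (an illustrative helper, not part of the project code; class and method names are assumptions):

```java
// Sketch of the state-prediction step. The state is [x, y, theta];
// the inputs are the odometry quantities velocity*deltaT and deltaTheta
// from the equations above.
public class MotionModel {
    // Predicted state: [x + DeltaX, y + DeltaY, theta + deltaTheta].
    public static double[] predict(double[] state, double vDt, double dTheta) {
        double theta = state[2];
        double dx = vDt * Math.cos(theta);   // DeltaX
        double dy = vDt * Math.sin(theta);   // DeltaY
        return new double[] { state[0] + dx, state[1] + dy, theta + dTheta };
    }

    // Jacobian F_k of the prediction model, as given above.
    public static double[][] jacobian(double[] state, double vDt) {
        double theta = state[2];
        double dx = vDt * Math.cos(theta);
        double dy = vDt * Math.sin(theta);
        return new double[][] {
            { 1, 0, -dy },
            { 0, 1,  dx },
            { 0, 0,  1  }
        };
    }
}
```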

The error covariance estimate is computed using the following equation:

              Error covariance estimation, P_k+1 = F_k P_k F_k^T + W Q W^T
              Here, Q = process noise = [C*(DeltaX)^2   0              0           ]
                                        [0              C*(DeltaY)^2   0           ]
                                        [0              0              C*(DeltaT)^2]
                    C = Gaussian noise constant
                and W = white noise = [DeltaX  DeltaY  DeltaTheta]^T
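The covariance prediction can be sketched with plain 3×3 array arithmetic (an illustrative helper, not project code; the noise term W Q W^T is passed in precomputed, matching the WQWT value that later appears in the javaslam call):

```java
// Sketch of the covariance prediction P_k+1 = F P F^T + W Q W^T,
// using hand-rolled matrix helpers so no matrix library is needed.
public class CovariancePredict {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int k = 0; k < b.length; k++)
                for (int j = 0; j < b[0].length; j++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static double[][] transpose(double[][] a) {
        double[][] r = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[j][i] = a[i][j];
        return r;
    }

    static double[][] add(double[][] a, double[][] b) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = a[i][j] + b[i][j];
        return r;
    }

    // P_k+1 = F * P * F^T + WQW^T
    public static double[][] predict(double[][] F, double[][] P, double[][] WQWt) {
        return add(mul(mul(F, P), transpose(F)), WQWt);
    }
}
```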

3. Update state:

For each re-observed landmark, update the estimated state using the following steps:

a) Compute the innovation covariance:

           Innovation covariance, S = H P_k+1 H^T + R
           Here, H = Jacobian of the measurement model
                   = [a   b    0]
                     [c   d   -1]
                and a = (landmarkX - x) / range
                    b = (landmarkY - y) / range
                    c = (y - landmarkY) / range^2
                    d = (x - landmarkX) / range^2
                    landmarkX = x coordinate of the landmark
                    landmarkY = y coordinate of the landmark
                    range = sqrt((landmarkX - x)^2 + (landmarkY - y)^2)
                 R = the measurement error matrix
                   = [rangeError   0           ]
                     [0            bearingError]

We also need another matrix D, the landmark's portion of the final measurement model; the use of D is explained later.

                   D = [-a   -b]
                       [-c   -d]
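Building H and D from the a, b, c, d entries above can be sketched as follows (an illustrative helper, not project code):

```java
// Sketch of the measurement Jacobian H for a single range-bearing
// observation, plus the matrix D (the landmark's portion of the
// measurement model, the negated 2x2 upper-left block of H).
public class MeasurementJacobian {
    // H is 2x3: rows are (range, bearing), columns are (x, y, theta).
    public static double[][] h(double x, double y, double landmarkX, double landmarkY) {
        double range = Math.hypot(landmarkX - x, landmarkY - y);
        double a = (landmarkX - x) / range;
        double b = (landmarkY - y) / range;
        double c = (y - landmarkY) / (range * range);
        double d = (x - landmarkX) / (range * range);
        return new double[][] {
            { a, b,  0 },
            { c, d, -1 }
        };
    }

    public static double[][] d(double[][] H) {
        return new double[][] {
            { -H[0][0], -H[0][1] },
            { -H[1][0], -H[1][1] }
        };
    }
}
```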

b) Compute the Kalman gain:

             Kalman gain, K = P_k+1 H^T S^-1

c) Update the estimate X_k+1 based on the observation z:

             Updated state, X = X_k+1 + K (z - H X_k+1)

Update P_k+1:

             Updated error covariance, P = (I - K H) P_k+1
             where I is the identity matrix.
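Steps 3a–3c together can be sketched end-to-end (an illustrative helper, not project code; since S is always 2×2 here, its inverse is taken in closed form):

```java
// Sketch of the full update step: innovation covariance, Kalman gain,
// state update, and covariance update, with hand-rolled matrix helpers.
// Dimensions: X is 3x1, P is 3x3, H is 2x3, R and S are 2x2, z is 2x1.
public class EkfUpdate {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int k = 0; k < b.length; k++)
                for (int j = 0; j < b[0].length; j++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static double[][] transpose(double[][] a) {
        double[][] r = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[j][i] = a[i][j];
        return r;
    }

    static double[][] add(double[][] a, double[][] b) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = a[i][j] + b[i][j];
        return r;
    }

    static double[][] sub(double[][] a, double[][] b) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = a[i][j] - b[i][j];
        return r;
    }

    // Closed-form inverse of a 2x2 matrix.
    static double[][] inv2(double[][] s) {
        double det = s[0][0] * s[1][1] - s[0][1] * s[1][0];
        return new double[][] {
            {  s[1][1] / det, -s[0][1] / det },
            { -s[1][0] / det,  s[0][0] / det }
        };
    }

    // Returns { X_updated, P_updated } after fusing observation z.
    public static double[][][] update(double[][] X, double[][] P,
                                      double[][] H, double[][] R, double[][] z) {
        double[][] S = add(mul(mul(H, P), transpose(H)), R); // S = H P H^T + R
        double[][] K = mul(mul(P, transpose(H)), inv2(S));   // K = P H^T S^-1
        double[][] Xnew = add(X, mul(K, sub(z, mul(H, X)))); // X + K(z - HX)
        double[][] I = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        double[][] Pnew = mul(sub(I, mul(K, H)), P);         // (I - KH) P
        return new double[][][] { Xnew, Pnew };
    }
}
```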

Java library for SLAM

I used a Kalman filter Java library, javaslam [1]. This library provides functions for the motion update and the measurement update. The steps for using javaslam are explained below:

1. Create an instance of KalmanSLAMFilter:

         KalmanSLAMFilter kf = new KalmanSLAMFilter(X,P);
         Where X = robot’s initial state
               P = robot’s initial error covariance

2. To predict next state estimate, call the motion function:

         kf.motion(Bk.times(uk).getColumnPackedCopy(),Fk, WQWT);

Here, Bk, uk, Fk, and WQWT are as defined in step 2 of the detailed description.

3. To update state estimate using re-observed landmarks, call the measurement function:

         kf.measurement(id, new double[] {0,0},  H,D,  R, z);
         here, id = landmark id
         z = [observedRange, observedBearing]

Here, H, D, R, and z are as defined in step 3 of the detailed description.


This implementation is evaluated under the following assumptions:

–Whatever the robot sees is a landmark

–Data association problem is not considered

–Robot odometry data and laser data are simulated

Evaluation is done using 8 landmarks and a straight-line path. The result is shown in the figure below.

Slam result.png

In the picture, the red line is the ground truth, the green line shows the odometry data, and the blue line shows the robot path estimated by the Kalman filter.

This implementation did not converge to the ground truth when the path was circular.


EKF-SLAM works well when the following conditions hold:

1. The initial estimates of the robot state and error covariance are almost correct. If the difference between the actual robot position and the initial estimate is large, the EKF does not converge.

2. The error model and the robot motion model are almost linear. EKF-SLAM does not converge when a large angular error is present.

3. Data association is almost perfect.

In most real-world situations, these assumptions do not hold, and more robust methods are needed in those cases.


Code can be downloaded from here: https://docs.google.com/leaf?id=0ByOY6vHGMB8PZjk1ZDJlODQtMWZiYS00NDI1LWEwNmMtMDUwNmJmZmY3YzJm&hl=en

To run the code, run SlamSimulation.java.

Useful Links:

1. Kalman filter for dummies

2. SLAM for dummies

3. Range-Only Robot Localization and SLAM with Radio