# An Extended Kalman Filter 2010


*Revision as of 03:16, 21 March 2010*


## Introduction:

In the pursuit of building truly autonomous mobile robots, simultaneous localization and mapping is considered one of the most important problems. Simultaneous localization and mapping (SLAM) is the process by which a mobile robot builds a map of an unknown environment and at the same time uses this map to compute its own location. SLAM is a chicken-and-egg problem: to localize the robot, a map is needed, but that map is still being built by the robot. One classical solution to the SLAM problem is the EKF-SLAM method. EKF-SLAM applies the Kalman filter, an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements, to linearized motion and measurement models. The goal of this project is to implement and simulate the EKF-SLAM method for solving the SLAM problem.

## How the project works:

An overview of the project is shown here:

The detailed process is explained below:

1. Observe and associate landmarks:

For simplicity, I assumed true data association: whenever the robot sees a landmark, it knows which landmark it is. Each landmark has an id, and landmarks are stored in an array. If a landmark with id n is not yet in the array, it is added.
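The bookkeeping above can be sketched in a few lines. This is a hypothetical helper, not part of the project's code; it uses a map keyed by id (the text stores landmarks in an array) so the "already seen?" test is explicit:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the landmark bookkeeping described above.
public class LandmarkStore {
    private final Map<Integer, double[]> landmarks = new LinkedHashMap<>();

    // Record an observation; returns true if the landmark was new.
    public boolean observe(int id, double x, double y) {
        if (landmarks.containsKey(id)) {
            return false; // re-observed landmark: feeds the update step
        }
        landmarks.put(id, new double[] {x, y});
        return true;      // new landmark: added to the store
    }

    public int size() {
        return landmarks.size();
    }
}
```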

2. Predict next state based on odometry:

The robot's state is defined as X = [x y theta], where x and y are the components of the robot's position and theta is the robot's orientation.

The robot's initial uncertainty is defined as P, which is a 3x3 diagonal matrix. The next state can be predicted using the following equation:

Next state estimate:

    X_{k+1} = F_{k} X_{k} + B_{k} u_{k}

Here, F_{k} = Jacobian of the prediction model =

    [1 0 -DeltaY]
    [0 1  DeltaX]
    [0 0    1   ]

    DeltaX = change of x = velocity * deltaT * cos(theta)
    DeltaY = change of y = velocity * deltaT * sin(theta)

B_{k} = input gain matrix =

    [cos(theta) 0]
    [sin(theta) 0]
    [    0      1]

u_{k} = control input vector = [deltaT * velocity, deltaTheta]^{T}
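Since B_{k} u_{k} works out to the displacement [DeltaX, DeltaY, deltaTheta], the state prediction amounts to adding that displacement to the current pose. A minimal sketch (variable names follow the text; the values in main are made up):

```java
public class PredictState {
    // Predicts the next pose [x, y, theta] from odometry by adding the
    // displacement B_k u_k = [DeltaX, DeltaY, deltaTheta] to the current pose.
    public static double[] predict(double x, double y, double theta,
                                   double velocity, double deltaT, double deltaTheta) {
        double deltaX = velocity * deltaT * Math.cos(theta); // change of x
        double deltaY = velocity * deltaT * Math.sin(theta); // change of y
        return new double[] {x + deltaX, y + deltaY, theta + deltaTheta};
    }

    public static void main(String[] args) {
        // Robot at the origin facing +x, moving 1 m/s for 1 s, turning 0.1 rad.
        double[] next = predict(0.0, 0.0, 0.0, 1.0, 1.0, 0.1);
        System.out.printf("x=%.2f y=%.2f theta=%.2f%n", next[0], next[1], next[2]);
    }
}
```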

The error covariance estimate is then computed:

Error covariance estimate:

    P_{k+1} = F_{k} P_{k} F_{k}^{T} + W Q W^{T}

Here, Q = process noise =

    [C * (DeltaX)^2        0                 0          ]
    [      0         C * (DeltaY)^2          0          ]
    [      0                0         C * (DeltaTheta)^2]

C = Gaussian noise, and W = white noise = [DeltaX DeltaY DeltaTheta]^{T}

3. For each re-observed landmark, update the estimated state using the following steps:

a) Compute the innovation covariance:

Innovation covariance:

    S = H P_{k+1} H^{T} + R

Here, H = Jacobian of the measurement model =

    [a b  0]
    [c d -1]

and

    a = (landmarkX - x) / range
    b = (landmarkY - y) / range
    c = (y - landmarkY) / range^2
    d = (x - landmarkX) / range^2

where landmarkX and landmarkY are the x and y coordinates of the landmark and

    range = sqrt((landmarkX - x)^2 + (landmarkY - y)^2)

R = the measurement error matrix =

    [rangeError      0      ]
    [    0      bearingError]

We also need another matrix, D, which is the landmark's portion of the final measurement model; the use of D is explained later.

    D = [-a -b]
        [-c -d]
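The entries of H and D can be computed directly from the robot pose and the landmark position. A small sketch (the coordinates in the test are example values):

```java
public class MeasurementJacobian {
    // Returns {a, b, c, d, range} as defined in the text, for the robot at
    // (x, y) observing a landmark at (landmarkX, landmarkY).
    public static double[] entries(double x, double y,
                                   double landmarkX, double landmarkY) {
        double range = Math.sqrt((landmarkX - x) * (landmarkX - x)
                               + (landmarkY - y) * (landmarkY - y));
        double a = (landmarkX - x) / range;
        double b = (landmarkY - y) / range;
        double c = (y - landmarkY) / (range * range);
        double d = (x - landmarkX) / (range * range);
        return new double[] {a, b, c, d, range};
    }
}
```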

b) Compute the Kalman gain:

Kalman gain, K = P_{k+1} H^{T} S^{-1}

c) Update the state estimate X_{k+1} based on the observation z:

Updated state, X = X_{k+1} + K(z - H X_{k+1})

Update P_{k+1}:

Updated error covariance, P = (I - KH) P_{k+1}, where I is the identity matrix.
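Steps a) through c) can be sketched end to end with plain arrays. This is an illustrative sketch, not the project's code; the H, R, and z in the test are toy values chosen so the result is easy to check by hand, not the range-bearing Jacobian above:

```java
public class EkfUpdate {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b[0].length; j++)
                for (int t = 0; t < b.length; t++)
                    r[i][j] += a[i][t] * b[t][j];
        return r;
    }
    static double[][] transpose(double[][] a) {
        double[][] r = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[j][i] = a[i][j];
        return r;
    }
    // Elementwise a + sign * b.
    static double[][] add(double[][] a, double[][] b, double sign) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = a[i][j] + sign * b[i][j];
        return r;
    }
    // Inverse of a 2x2 matrix (S is 2x2 for a range-bearing measurement).
    static double[][] inv2(double[][] s) {
        double det = s[0][0] * s[1][1] - s[0][1] * s[1][0];
        return new double[][] {{ s[1][1] / det, -s[0][1] / det},
                               {-s[1][0] / det,  s[0][0] / det}};
    }

    // Measurement update for a 3x1 state X and 2x1 observation z.
    // Returns {updated X (3x1), updated P (3x3)}.
    public static double[][][] update(double[][] X, double[][] P,
                                      double[][] H, double[][] R, double[][] z) {
        double[][] Ht = transpose(H);
        double[][] S = add(mul(mul(H, P), Ht), R, 1.0);   // S = H P H^T + R
        double[][] K = mul(mul(P, Ht), inv2(S));          // K = P H^T S^-1
        double[][] innovation = add(z, mul(H, X), -1.0);  // z - H X
        double[][] Xn = add(X, mul(K, innovation), 1.0);  // X + K (z - H X)
        double[][] I = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[][] Pn = mul(add(I, mul(K, H), -1.0), P);  // (I - K H) P
        return new double[][][] {Xn, Pn};
    }
}
```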

## Java library for SLAM

I used a Kalman filter Java library, javaslam (http://ai.stanford.edu/~paskin/slam/tjtf-java.zip). This library provides functions for the motion update and the measurement update; I just need to build the matrices and plug them into those functions. The steps for using javaslam are explained below:

1. Create an instance of KalmanSLAMFilter:

    KalmanSLAMFilter kf = new KalmanSLAMFilter(X, P);

where X = the robot's initial state and P = the robot's initial error covariance.

2. To predict next state estimate, call the motion function:

    kf.motion(Bk.times(uk).getColumnPackedCopy(), Fk, W Q W^{T});

here, Bk, uk, Fk, and W Q W^{T} are explained in step 2 of the detailed description.

3. To update state estimate using re-observed landmarks, call the measurement function:

    kf.measurement(id, new double[] {0, 0}, H, D, R, z);

here, id = the landmark id and z = [observedRange, observedBearing]; H, D, R, and z are explained in step 3 of the detailed description.

## Results:

This implementation is evaluated with the following assumptions:

- Whatever the robot sees is a landmark
- The data association problem is not considered
- Robot odometry data and laser data are simulated

Evaluation is done using 11 landmarks and a straight-line path.
The result is shown in the figure below.

In the picture, the red line is the ground truth, the green line shows the odometry data, and the blue line shows the robot path estimated by the Kalman filter.

This implementation did not converge to the ground truth when the path was circular.

## Conclusion:

EKF-SLAM works well when the following conditions hold:

1. The initial estimates of the robot state and error covariance are nearly correct. If the difference between the actual robot position and the initial estimate is large, the EKF fails to predict the correct next state.

2. The error model and the robot motion model are almost linear. EKF-SLAM does not converge if a large angular error is present.

## Code:

Code can be downloaded from here.