# ASME 07 Notes

## Problem Space

The goal of this page is to catalog current problems in robotics and the solutions that have been proposed for them.

# Simultaneous Localization and Mapping (SLAM)

SLAM is the problem of building a map of an unknown environment while simultaneously estimating the robot's pose within that map, enabling exploration and navigation without prior knowledge of the surroundings.

Authors of the problem statement:

(1) Smith and Cheeseman 1987, (2) Leonard et al. 1992b, (3) Thrun 2003, (4) Thrun et al. 2005.

## Kalman Filtering

(See also Russell and Norvig, pp. 551-559.)

For "landmark-based SLAM":

*from Square root SAM: simultaneous localization and mapping via square root information smoothing. Frank Dellaert and Michael Kaess. *
The International Journal of Robotics Research 25.12 (Dec 2006): p1181(23). (10275 words)

"We will primarily be concerned with landmark-based SLAM, for which the earliest and most popular methods are based on the extended Kalman filter (EKF) (Smith et al. 1987, 1990; Moutarlier and Chatila 1989a, 1989b;Ayache and Faugeras 1989; Leonard and Durrant-Whyte 1992, Leonard et al. 1992). The EKF recursively estimates a Gaussian density over the current pose of the robot and the position of all landmarks (the map). However, it is well known that the computational complexity of the EKF becomes intractable fairly quickly, and hence much effort has been focused on modifying and extending the filtering approach to cope with larger-scale environments (Newman 1999; Deans and Hebert 2000; Dissanayake et al. 2001; Castellanos et al. 1999; Julier and Uhlman 2001b; Leonard and Feder 2001; Guivant and Nebot 2001; Paskin 2003; Leonard and Newman 2003; Bosse et al. 2003; Thrun et al. 2004; Guivant et al 2004; Tardos et al. 2002; Williams et al. 2002; Rodriguez-Losada et al. 2004). However, filtering itself has been shown to be inconsistent when applied to the inherently non-linear SLAM problem (Julier and Uhlman 2001a), i.e., even the average taken over a large number of experiments diverges from the true solution. Since this is mainly due to linearization choices that cannot be undone in a filtering framework, there has recently been considerable interest in the smoothing version of the SLAM problem."

## Tools used for computation

MATLAB is used for numerical implementation of the algorithms, simulating a 2-D world with point landmarks and additive sensor noise.

## Extended Information Filter (EIF)

[1] "A number of recent SLAM algorithms have explored reformulating the estimation problem within the context of an extended information filter (EIF), which is the dual of the extended Kalman filter (EKF) (Bar Shalom et al. 2001). The information form is often called the canonical or natural representation of the Gaussian distribution because it stems from expanding the quadratic in the exponential."

*from Visually mapping the RMS Titanic: conservative covariance estimates for SLAM information filters. Ryan M. Eustice, Hanumant Singh, John J. Leonard and Matthew R. Walter. *
The International Journal of Robotics Research 25.12 (Dec 2006): p1223(20). (8549 words)
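The "expanding the quadratic in the exponential" remark can be made concrete. Writing the Gaussian in moment form and multiplying out the exponent yields the canonical parameters: the information matrix $\Lambda$ and the information vector $\eta$:

```latex
p(x) = \frac{1}{\sqrt{|2\pi\Sigma|}}
       \exp\!\Big(-\tfrac{1}{2}(x-\mu)^{\mathsf T}\Sigma^{-1}(x-\mu)\Big)
     \;\propto\; \exp\!\Big(-\tfrac{1}{2}\,x^{\mathsf T}\Lambda x + \eta^{\mathsf T}x\Big),
\qquad \Lambda = \Sigma^{-1}, \quad \eta = \Sigma^{-1}\mu .
```

In this parameterization, conditional independences between state variables show up as zeros in $\Lambda$, which is what the sparsification arguments in the EIF literature exploit.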

[2] "Use of an Extended Information Filter (EIF) to solve SLAM in order to gain a computational saving has also been demonstrated by a number of researchers. A notable result has been that from Thrun et al. (2004), where a sparsification process is used to reduce the number of non-zero elements in the information matrix resulting in significant computational savings. Although Frese (2005) provided a proof for the approximate sparseness of the information matrix, Eustice et al. (2005b) demonstrated that the process of sparsification of the information matrix leads to inconsistent estimates.

In a recent development, Eustice et al. (2005a) show that the inclusion of the robot trajectory in the form of past robot poses in the state vector leads to an exactly sparse information matrix. The resulting Exactly Sparse Delayed State Filter (ESDSF) provides clear computational advantages when a view-based map representation is used, where a sequence of robot poses are used to describe the map. In the example presented in Eustice et al. (2005a), the "map" is not represented within the state vector and is therefore not directly updated. When both features and robot poses are included in the state vector, the EIF also provides an exactly sparse information matrix (Dellaert 2005; Frese 2006). However, in this scenario the sparseness of the information matrix is achieved through increasing the state dimension. The state dimension keeps increasing even when the robot is revisiting previous explored regions. This limitation has been overcome by Walter et al. (2005), who achieve an exactly sparse information matrix with just the last robot pose included in the state vector. The information matrix is controlled to be sparse by deliberately "kidnapping" and "relocating" the robot from time to time. Further comments on the relationship between this work and D-SLAM is presented in Section 7. "

*from D-SLAM: a decoupled solution to simultaneous localization and mapping. Zhan Wang, Shoudong Huang and Gamini Dissanayake. *
The International Journal of Robotics Research 26.2 (Feb 2007): p187(18). (9065 words)

[3] "This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to a place it has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as sparse extended information filter or thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information space parameterization without incurring any sparse approximation error. Therefore, it can produce equivalent results to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic"

*from IEEE Transactions on Robotics [see also IEEE Transactions on Robotics and Automation]*
Publication Date: Dec. 2006
Volume: 22, Issue: 6
Pages: 1100-1114
ISSN: 1552-3098
Digital Object Identifier: 10.1109/TRO.2006.886264

## D-SLAM

[1] "In this paper, a decoupled solution to the SLAM problem (D-SLAM) is provided. It is demonstrated that the SLAM problem can be reformulated such that the state vector for mapping contains only the locations of all the features while the state vector for localization contains only the locations of the robot and local features. It is shown that although localization and mapping processes are decoupled, the correlations among the feature location estimates are still maintained, and when formulated in the information form, the estimation problem can be solved at a cost of O(N), where N is the total number of features in the environment."

*from D-SLAM: a decoupled solution to simultaneous localization and mapping. Zhan Wang, Shoudong Huang and Gamini Dissanayake. *
The International Journal of Robotics Research 26.2 (Feb 2007): p187(18). (9065 words)

## Particle Filter

[1] "The starting point of our analysis is the Rao-Blackwellized particle filter described by Hahnel et al. (2003). This algorithm uses a particle filter to approximate the posterior probability distribution over possible maps, and adds robot observations incrementally using a Bayesian update step. This algorithm can readily be generalized to solve the multi-robot SLAM problem, assuming that the initial pose of all robots is known. More precisely, one must know at least the initial relative pose of all but one of the robots, with the pose of the remaining robot being chosen arbitrarily. This assumption is extremely limiting in practice: it implies either that all robots have started from the same location, or that the initial locations of the robots have been surveyed by some external means."

*from Multi-robot simultaneous localization and mapping using particle filters. Andrew Howard. *
The International Journal of Robotics Research 25.12 (Dec 2006): p1243(14). (6983 words)

[2] "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches."

*from IEEE Transactions on Robotics [see also IEEE Transactions on Robotics and Automation]*
Publication Date: Feb. 2007
Volume: 23, Issue: 1
Pages: 34-46
ISSN: 1552-3098
Digital Object Identifier: 10.1109/TRO.2006.889486
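The "selectively carry out resampling" idea in quote [2] is usually implemented by monitoring the effective sample size and resampling only when it collapses. The sketch below is a 1-D particle filter, not the paper's grid-mapping RBPF; the motion/measurement models and all constants are assumptions for the demo. In a full RBPF each particle would additionally carry its own map, conditioned on that trajectory hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                               # number of particles

particles = rng.normal(0.0, 1.0, N)   # initial pose hypotheses
weights = np.ones(N) / N

def pf_step(particles, weights, u, z, motion_std=0.2, meas_std=0.5):
    # Predict: sample each particle through the noisy motion model.
    particles = particles + u + rng.normal(0.0, motion_std, len(particles))
    # Weight: likelihood of the observation under each hypothesis.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Selective resampling: only when the effective sample size collapses,
    # which reduces the particle-depletion problem the quote describes.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

x_true = 0.0
for _ in range(30):
    x_true += 1.0                     # robot advances one unit per step
    z = x_true + rng.normal(0.0, 0.5) # noisy position observation
    particles, weights = pf_step(particles, weights, 1.0, z)

estimate = np.sum(weights * particles)
```

The paper's other contribution, an improved proposal distribution that also conditions on the latest observation, would replace the plain motion-model sampling in the predict step.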

# Localization

"In the literature, the localization task is generally divided into two phases: (a) the first step consists in the integration of dead reckoning sensors to predict the vehicle location; (b) the second phase, which is usually activated at a much slower rate, uses an absolute exteroceptive sensing mechanism to update the predicted position. Many research works are related to road vehicle applications, in which an Inertial Measurement Unit (IMU) is used to provide higher update rate of the position between two consecutive GPS data acquisitions (Nebot et al. 1997). In Bevly et al. (2001) the same pair of sensors is used for both localization and estimation of wheel diameter change and vehicle sideslip. Dissanayake et al. (2001) propose to use the non-holonomic constraints that govern the motion of a vehicle on a surface to align the IMU and significantly improve the accuracy of the localization.

Most applications tackle the problem of vehicle localization on flat and smooth soil. In such conditions, relatively accurate vehicle models can be developed, which lead to good state prediction. However, such models are not available in challenging environments such as forests, mines or craters on the surface of Mars. For these types of application the full 3D state of the rover has to be estimated and absolute positioning means such as GPS are not available. When the state prediction model is not well known and when no absolute sensing mechanism is accessible to correct the vehicle position, accurate dead reckoning becomes crucial to prevent the position error increasing too quickly. Scheding et al. (1999) present interesting results for an underground mining vehicle. In particular, they demonstrate that inertial sensors can be used to correct non-systematic errors due to soil irregularities when fused with other sensors such as wheel encoders and laser scanners. In Strelow and Singh (2003), the 6 DOFs of a vehicle are estimated using an IMU and features tracked in images taken by an omnidirectional vision system. In Mallet et al. (2000) and Olson et al. (2001) the 3D state of rough terrain rovers is tracked on the sole base of features tracking and stereovision."

*from 3D position tracking in challenging terrain. Pierre Lamon and Roland Siegwart. *
The International Journal of Robotics Research 26.2 (Feb 2007): p167(20). (9685 words)
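The two-phase scheme in the quote, high-rate dead reckoning corrected by a slower absolute sensor, can be illustrated with a scalar toy filter. The models, rates, and noise values below are assumptions for the demo, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D illustration: phase (a) integrates noisy odometry every step,
# so the variance grows without bound; phase (b) applies an absolute
# fix (e.g. GPS) at a tenth of the rate to bound the drift.
x_est, var = 0.0, 0.0
x_true = 0.0
ODOM_VAR, GPS_VAR = 0.01, 1.0

for step in range(1, 101):
    # Phase (a): dead reckoning at every step.
    v = 0.1                                  # commanded velocity increment
    x_true += v
    x_est += v + rng.normal(0.0, ODOM_VAR ** 0.5)
    var += ODOM_VAR                          # drift accumulates
    # Phase (b): an absolute fix every 10th step corrects the drift.
    if step % 10 == 0:
        z = x_true + rng.normal(0.0, GPS_VAR ** 0.5)
        k = var / (var + GPS_VAR)            # scalar Kalman gain
        x_est += k * (z - x_est)
        var *= 1.0 - k
```

When no absolute sensor is available, as in the forests, mines, and planetary scenarios the quote describes, only phase (a) remains, which is why the cited work concentrates on making the dead-reckoning prediction itself as accurate as possible.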

# Fault Diagnosis

# Cooperative Mobile Robots

# Tuning

# Decision Making

High-level sensor-based controllers are able to look ahead toward obstacles and temporarily remove tasks that would cause the robot to become stuck or constrained by the environment.

*from Task Sequencing for High-Level Sensor-Based Control*
Mansard, N.; Chaumette, F.
Page(s): 60-72
Digital Object Identifier 10.1109/TRO.2006.889487

# Mapping/World Modeling

# Fault Adaptation

# Optimizations

# Control

# Stability

# Object Recognition

# Path Planning / Motion Planning

Rapidly-exploring Random Trees (RRTs): incrementally grow a tree of feasible motions by repeatedly sampling a random configuration and extending the nearest tree node a fixed step toward the sample.

http://ai.stanford.edu/~mitul/mpk/
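A minimal RRT sketch, assuming a 2-D square world with no obstacles (a real planner would add a collision check before accepting each extension); the world bounds, step size, and start/goal points are invented for the demo:

```python
import math
import random

random.seed(0)

STEP = 0.5
start, goal = (0.0, 0.0), (9.0, 9.0)
nodes = [start]
parent = {start: None}          # tree edges as child -> parent

def nearest(q):
    # Linear scan; real implementations use a kd-tree here.
    return min(nodes, key=lambda n: math.dist(n, q))

for _ in range(5000):
    # Sample a random configuration in the 10x10 world.
    q_rand = (random.uniform(0, 10), random.uniform(0, 10))
    q_near = nearest(q_rand)
    d = math.dist(q_near, q_rand)
    if d == 0.0:
        continue
    # Extend a fixed step from the nearest node toward the sample.
    q_new = (q_near[0] + STEP * (q_rand[0] - q_near[0]) / d,
             q_near[1] + STEP * (q_rand[1] - q_near[1]) / d)
    nodes.append(q_new)
    parent[q_new] = q_near
    if math.dist(q_new, goal) < STEP:   # close enough to the goal: stop
        break

# Recover the path by walking parent pointers back to the root.
path = []
node = nodes[-1]
while node is not None:
    path.append(node)
    node = parent[node]
path.reverse()
```

The uniform sampling is what gives RRTs their characteristic bias toward unexplored regions: large empty areas of the space attract proportionally more samples.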

# Robot-Human Interaction

# Behavior Modeling

# Novel Movement

# Tools

## COLLADA

COLLADA is an open standard for exchanging 3D asset information, widely used in the graphics community.