CSE 473 Project 2

Particle filters for mobile robot localization

Please contact me (fox at cs.washington.edu) if you have any questions!!

Objective

The objective of this project is to gain an understanding of how to implement a system that can deal with uncertain sensor information in a dynamically changing environment. We will analyze this problem in the context of mobile robot localization.
 

Dates

The project has two main parts. In the first part, you will implement the motion model and the resampling part of the particle filter. In the second part, you will design a probabilistic model of proximity sensors and incorporate it into the particle filter.

Assigned: Wed, May 18.
Part I due: Tuesday, May 24, 9am.
Part II due: Fri, June 3, 9:30am.

Part I: Motion Model

Updated:

The initial script reader will only provide a sequence of motion steps, represented by the three parameters [rot1, rot2, trans] (see slide 73 in the localization lecture notes). Your task is to implement a particle filter that models a mobile robot without external sensors. To do so, initialize all particles (e.g., 1,000) with the same location (example start locations will be given with the scripts). For each motion update provided by the script, move each particle according to a "noisy version" of the script motion (no resampling necessary). Display the particles in the map and make sure that the distributions look similar to the ones shown on slide 22 of the localization lecture notes.
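
In Python, the per-particle motion update might look like the following sketch. The interpretation of the motion parameters (first rotation, then translation, then second rotation) and the noise constants ROT_NOISE and TRANS_NOISE are assumptions here; check them against the lecture notes and tune the noise so the spread matches the distributions on slide 22:

```python
import math
import random

# Placeholder noise standard deviations -- tune these against the scripts.
ROT_NOISE = 0.05      # radians
TRANS_NOISE = 5.0     # same units as the map (e.g., cm)

def sample_motion(particle, rot1, rot2, trans):
    """Move one (x, y, theta) particle by a noisy version of the scripted
    motion: rotate by rot1, translate by trans, rotate by rot2."""
    x, y, theta = particle
    n_rot1 = rot1 + random.gauss(0.0, ROT_NOISE)
    n_trans = trans + random.gauss(0.0, TRANS_NOISE)
    n_rot2 = rot2 + random.gauss(0.0, ROT_NOISE)
    theta += n_rot1
    x += n_trans * math.cos(theta)
    y += n_trans * math.sin(theta)
    theta += n_rot2
    return (x, y, theta)

# Start all particles at the same (example) location, then apply the same
# scripted step to each one independently so the set spreads out.
particles = [(100.0, 200.0, 0.0)] * 1000
particles = [sample_motion(p, 0.1, -0.05, 20.0) for p in particles]
```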

Once this works, use the map function occupied(realPos) to check whether or not a particle lies inside an obstacle of the map. If it does, assign it a weight of 0.0; otherwise, assign a weight of 1.0. After moving all particles, resample them based on these weights. Only the particles in free space should "survive".
If you initialize your particles inside a hallway with the correct orientation, then the walls of the hallway should keep the particles inside the hallway. We will also provide a more complete motion script. If you use this motion script along with the correct start position, the robot should be able to keep track of its position without any sensors! Even better, if you use enough particles and initialize them uniformly inside the map, then the robot should be able to globally localize itself. Try it out.
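
The weight-and-resample step described above could be sketched as follows. The occupied() stub below is only a stand-in for the provided map function (here a toy rectangular map), and the particle representation is assumed to be (x, y, theta):

```python
import random

def occupied(pos):
    """Stub for the provided map function: True inside an obstacle.
    This toy version treats everything outside a 1000x1000 box as occupied."""
    x, y, _ = pos
    return not (0.0 <= x <= 1000.0 and 0.0 <= y <= 1000.0)

def weigh(particles):
    """Weight 0.0 for particles inside obstacles, 1.0 for free space."""
    return [0.0 if occupied(p) else 1.0 for p in particles]

def resample(particles, weights):
    """Draw a new set of the same size with probability proportional to
    the weights; zero-weight particles cannot survive."""
    if sum(weights) == 0.0:
        return particles  # degenerate case: keep the old set
    return random.choices(particles, weights=weights, k=len(particles))

# Uniform initialization, deliberately spilling outside the toy map so
# some particles start inside "obstacles".
particles = [(random.uniform(-100, 1100), random.uniform(0, 1000), 0.0)
             for _ in range(1000)]
particles = resample(particles, weigh(particles))
```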

Please send me your results by email by Tuesday, May 24th. Your results should contain screenshots that show examples of the sample set distributions during the localization runs (we will send you the scripts to analyze before the weekend).

Part II: Sensor model and robot location

In this part you will add the sensor model to the particle filter and write code to determine a good guess for the robot location.

The sensor model provides the likelihood of observing a sensor scan given the location of a particle. Each scan consists of a vector of range measurements that provide the measured distance to the next obstacle in the direction of a sensor beam. To compute the likelihood of a scan, you simply multiply the likelihoods of the individual measurements (just like Figure 25.7 in the AIMA book).

To obtain the likelihood of an individual measurement, you first determine the "true" distance TRUE_D to the next obstacle in the map (we will provide code for that). The likelihood of the measured distance MEAS_D is then given by P( MEAS_D | TRUE_D), which should be a probabilistic model like the one shown on slide 23 of the robot localization lecture slides.

Here is a suggestion for what should work: Let's assume that the "true" distance TRUE_D is below the maximum range of the sensor (500cm). In this case, there is still a certain probability MAX of measuring 500cm or more. For all other measured distances, the probability should be a mixture of a Gaussian distribution centered at TRUE_D, and a uniform distribution. Let MIX in [0,1] be the mixture parameter. Then the probability is given by

(1-MAX) * (MIX * GaussianProb( MEAS_D | TRUE_D) + (1 - MIX) * UniformProb( MEAS_D))

If TRUE_D is at maximum range or beyond, then there is no nearby obstacle. In this case, the probability of a measurement of at least 500cm should be pretty high, say MAX2, and the probability of all shorter measurements should be (1-MAX2)/5000.
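
Putting the two cases together, a beam likelihood function might look like the sketch below, with the whole-scan likelihood as the product over independent beams. MAX, MAX2, MIX, and SIGMA are placeholder values that you will need to tune, and the (1-MAX2)/5000 constant is taken directly from the handout:

```python
import math

MAX_RANGE = 500.0   # cm, maximum sensor range
MAX = 0.05          # placeholder: prob. of a max-range reading despite a nearby obstacle
MAX2 = 0.95         # placeholder: prob. of a max-range reading with no nearby obstacle
MIX = 0.8           # placeholder: Gaussian vs. uniform mixture weight
SIGMA = 20.0        # placeholder: cm, std. dev. of the Gaussian around TRUE_D

def gaussian_prob(meas_d, true_d):
    """Gaussian density centered at the true distance."""
    return (math.exp(-0.5 * ((meas_d - true_d) / SIGMA) ** 2)
            / (SIGMA * math.sqrt(2.0 * math.pi)))

def beam_likelihood(meas_d, true_d):
    """P(MEAS_D | TRUE_D) following the suggested mixture model."""
    if true_d >= MAX_RANGE:              # no obstacle within sensor range
        if meas_d >= MAX_RANGE:
            return MAX2
        return (1.0 - MAX2) / 5000.0     # constant from the handout
    if meas_d >= MAX_RANGE:              # sensor missed a nearby obstacle
        return MAX
    uniform = 1.0 / MAX_RANGE
    return (1.0 - MAX) * (MIX * gaussian_prob(meas_d, true_d)
                          + (1.0 - MIX) * uniform)

def scan_likelihood(scan, true_dists):
    """Likelihood of a whole scan: product over the individual beams."""
    w = 1.0
    for meas_d, true_d in zip(scan, true_dists):
        w *= beam_likelihood(meas_d, true_d)
    return w
```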

Once your sensor model helps the robot to better localize, you still need to extract a good estimate for where the robot actually is. The simplest idea would be to take the sample that has the highest weight before resampling. The problem with this approach is that the best estimate "jumps" arbitrarily between iterations. Thus, we want you to implement a slightly more robust estimate, which is the weighted mean of the particles. Be careful when computing the orientation, since you can't just average angles (the average of 5deg and 355deg is NOT 180deg). Be sure to adequately display your estimate, so that you can debug your code.
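
One way to sketch the weighted-mean estimate, assuming (x, y, theta) particles with theta in radians: average the positions directly, but average the orientations via the weighted mean of their unit vectors, so that e.g. 5deg and 355deg average to 0deg rather than 180deg:

```python
import math

def weighted_mean_pose(particles, weights):
    """Weighted mean of (x, y, theta) particles, with a circular mean
    (via atan2 of summed sines and cosines) for the orientation."""
    total = sum(weights)
    mx = sum(w * p[0] for p, w in zip(particles, weights)) / total
    my = sum(w * p[1] for p, w in zip(particles, weights)) / total
    sin_sum = sum(w * math.sin(p[2]) for p, w in zip(particles, weights))
    cos_sum = sum(w * math.cos(p[2]) for p, w in zip(particles, weights))
    mtheta = math.atan2(sin_sum, cos_sum)
    return (mx, my, mtheta)
```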

Extra credit: More ambitious groups can develop more advanced techniques for estimating the robot location. One idea is to use a three-dimensional grid and sum up the weights of all the particles that fall in each grid cell. Then you take the cell with the highest weight and compute the average over the particles in this cell. Even cooler, you can extract a mixture of Gaussians from the particles, and then use the Gaussian with the most particles associated with it (for more background, see Section 20.3 in the AIMA book).
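
The grid idea can be sketched as follows; the cell sizes CELL_XY and CELL_TH are arbitrary placeholders, and a real implementation would tune them to the map resolution:

```python
import math
from collections import defaultdict

CELL_XY = 50.0             # placeholder cell size in map units (e.g., cm)
CELL_TH = math.pi / 8.0    # placeholder angular cell size in radians

def grid_mode_estimate(particles, weights):
    """Bin (x, y, theta) particles into a 3-D grid, pick the cell with
    the most total weight, and return the weighted mean over that cell
    (with a circular mean for the orientation)."""
    cells = defaultdict(list)
    for p, w in zip(particles, weights):
        key = (int(p[0] // CELL_XY), int(p[1] // CELL_XY),
               int((p[2] % (2.0 * math.pi)) // CELL_TH))
        cells[key].append((p, w))
    best = max(cells.values(), key=lambda cell: sum(w for _, w in cell))
    total = sum(w for _, w in best)
    mx = sum(w * p[0] for p, w in best) / total
    my = sum(w * p[1] for p, w in best) / total
    sin_s = sum(w * math.sin(p[2]) for p, w in best)
    cos_s = sum(w * math.cos(p[2]) for p, w in best)
    return (mx, my, math.atan2(sin_s, cos_s))
```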

What you should turn in

Background readings