Lab 5: Localization
Our briefing slides can be found here.
Overview and Motivations (Mia)
For lab 5, our goal was to give the robot the ability to find its position in a known environment. Localization is important for our future tasks in path planning and following. Without being able to check the robot's position as it is driving, the path follower would have to rely on the robot precisely following navigation commands. However, as the car drives, error builds up between this estimated position and the actual position, as a result of the surface it is driving on as well as wheel alignment and servo error. Therefore we cannot rely solely on interoceptive sensors to localize. Instead we use a particle filter which takes input from both the interoceptive motion sensors and LIDAR data. The particles in the particle filter each represent a guess for the robot's position and heading. As the car drives, the motion model updates each particle based on the odometry. Then, at a frequency of 8 Hz, the particle filter prunes the particles based on whether the LIDAR scan matches each particle's position, duplicating the particles that were not pruned. After testing in both the simulator and on the actual robot, we tuned the motion and sensor models to get better results. We found that the robot is able to localize well in areas with lots of features, but struggles in long uniform hallways.
Proposed Approach
Motion Model (Mia)
The motion model takes in an array of particles and updates them according to the odometry measurements from the robot's internal sensors. The robot outputs an odometry message which gives an approximate position [x, y, θ] in a global frame. The motion model takes the difference between the current position and the previous position to get [dx, dy, dθ] in the odometry frame.
Figure 1A: Odometry Frame
Then the differences in position are rotated by the negative of the robot's heading in odometry coordinates.
Figure 1B: Rotation Between Odometry and Car Frames
Figure 1C: Change in Position in the Car Frame
Then random noise drawn from a normal distribution is added to each particle. We tune the noise values later.
Figure 1D: Noise Calculation
Then the differences in position are rotated to map coordinates according to each particle's angle.
Figure 2A: Rotation Between Car and Map Frames
Figure 2B: Change in Position in the Map Frame
Then each change in position is added to its particle.
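A minimal numpy sketch of this update is below. The function name, array layout, and noise standard deviations are illustrative placeholders rather than our exact implementation; particles are stored as an (n, 3) array of [x, y, θ].

```python
import numpy as np

def motion_model_update(particles, curr_pose, prev_pose,
                        xy_noise=0.05, theta_noise=0.02):
    # Difference between consecutive odometry poses [x, y, theta]
    # in the odometry frame
    dx, dy, dtheta = curr_pose - prev_pose

    # Rotate the translation into the car frame by the negative of the
    # previous odometry heading (Figures 1B and 1C)
    th = prev_pose[2]
    dx_car = np.cos(-th) * dx - np.sin(-th) * dy
    dy_car = np.sin(-th) * dx + np.cos(-th) * dy

    # Add normally distributed noise independently per particle (Figure 1D);
    # these standard deviations are example values, not our tuned ones
    n = len(particles)
    dx_n = dx_car + np.random.normal(0, xy_noise, n)
    dy_n = dy_car + np.random.normal(0, xy_noise, n)
    dth_n = dtheta + np.random.normal(0, theta_noise, n)

    # Rotate each noisy car-frame delta into the map frame using that
    # particle's own heading (Figures 2A and 2B), then accumulate
    ths = particles[:, 2]
    particles[:, 0] += np.cos(ths) * dx_n - np.sin(ths) * dy_n
    particles[:, 1] += np.sin(ths) * dx_n + np.cos(ths) * dy_n
    particles[:, 2] += dth_n
    return particles
```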
Sensor Model (Nada)
Once we had determined particle positions with the motion model, the sensor model used LIDAR data to filter particles based on probability. In this way, we were able to take our LIDAR sensor data and the robot's current particle distribution and compute the probability of receiving our LIDAR readings given the car location denoted by each particle in our motion model distribution. We then used these probabilities to update particle weights and determine the most likely car position. We calculated each particle's likelihood based on four factors: the probability of detecting a known obstacle in the map, the probability of a short measurement, the probability of a very large/missed measurement, and the probability of a random measurement. These probabilities were defined as follows:
Figure 3A: Calculating p_hit
The probability of detecting a known obstacle in the map was represented as a Gaussian distribution centered around the ground truth distance between the hypothesis pose and the nearest map obstacle.
Figure 3B: Calculating p_short
The probability of a short measurement was represented as a downward sloping line as the ray gets further from the robot. This could happen if an object that was not accounted for in the map was detected by the robot before a wall was.
Figure 3C: Calculating p_max
The probability of a very large measurement was represented as a large spike in probability at the maximal range value. This would occur if the ground truth distance was larger than the maximum LIDAR reading distance.
Figure 3D: Calculating p_rand
The probability of a random measurement was represented by a small uniform value.
From here, we mixed these four distributions by a weighted average as follows:
Figure 4: Calculating p_total
To find the total probability of any given particle denoting the robot's actual position, we simply added up all four individual probabilities, each multiplied by some factor such that the factors summed to one.
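The exact formulas are in the figures above; for reference, a reconstruction using the standard forms (as in Thrun et al., Probabilistic Robotics, which the descriptions above appear to match) is given below, with z the measured distance, z* the ground truth distance, and η a normalizing constant:

```latex
p_{hit}(z \mid z^*)   = \eta \, \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(z - z^*)^2}{2\sigma^2}\right), \quad 0 \le z \le z_{max}
p_{short}(z \mid z^*) = \frac{2}{z^*}\left(1 - \frac{z}{z^*}\right), \quad 0 \le z \le z^*
p_{max}(z \mid z^*)   = \mathbf{1}\{z = z_{max}\}
p_{rand}(z \mid z^*)  = \frac{1}{z_{max}}, \quad 0 \le z \le z_{max}

p(z \mid z^*) = \alpha_{hit}\, p_{hit} + \alpha_{short}\, p_{short} + \alpha_{max}\, p_{max} + \alpha_{rand}\, p_{rand}, \qquad \alpha_{hit} + \alpha_{short} + \alpha_{max} + \alpha_{rand} = 1
```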
Precomputing the Sensor Model (Nada)
In order to speed up computation, we precomputed a discretized sensor model table from which we could simply look up the probability for any pair of z_t and z_t* values. To do this, we computed p_total for all combinations of z_t and z_t* in the range 0 to z_max, incrementing z_t and z_t* by 0.1 each time. In doing this, we were able to simply look up any probability given a z_t and z_t* value, which sped up our computation significantly.
Figure 5: Probability Distribution of Precomputed Model
For any given ground truth distance (shown at the green line), a cross section like this one showed us the probability distribution of reading some measured distance from the LIDAR scan.
Once we had these cross sections, we were able to create the entire lookup table by computing the cross section for each given ground truth distance and normalizing each distribution. Here you can see what this probability distribution looked like.
Figure 6: Probability Distribution
This probability distribution shows all combinations of z_t and z_t*.
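A minimal sketch of this precomputation is below, assuming the standard component forms given earlier. The 0.1 increment comes from the text; z_max, σ, and the mixture weights here are illustrative placeholders, not our tuned values.

```python
import numpy as np

Z_MAX, STEP, SIGMA = 10.0, 0.1, 0.5                      # assumed, not our tuned values
A_HIT, A_SHORT, A_MAX, A_RAND = 0.70, 0.10, 0.05, 0.15   # example weights summing to 1

z = np.arange(0, Z_MAX + STEP, STEP)   # discretized distances in 0.1 increments
Zt, Zstar = np.meshgrid(z, z)          # rows: ground truth z_t*, columns: measured z_t

# The four component distributions described in Figures 3A-3D
p_hit = np.exp(-(Zt - Zstar) ** 2 / (2 * SIGMA ** 2)) / np.sqrt(2 * np.pi * SIGMA ** 2)
denom = np.maximum(Zstar, STEP)        # guard against dividing by z_t* = 0
p_short = np.where(Zt <= Zstar, (2 / denom) * (1 - Zt / denom), 0.0)
p_max = np.isclose(Zt, Z_MAX).astype(float)   # spike at the maximal range value
p_rand = np.full_like(Zt, 1.0 / Z_MAX)        # small uniform value

# Weighted mixture (Figure 4), then normalize each cross section so the
# distribution over measured distances sums to 1 for every ground truth row
table = A_HIT * p_hit + A_SHORT * p_short + A_MAX * p_max + A_RAND * p_rand
table /= table.sum(axis=1, keepdims=True)

def lookup(z_t, z_t_star):
    # Probability of measuring z_t given ground truth distance z_t*
    return table[int(round(z_t_star / STEP)), int(round(z_t / STEP))]
```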
Applying the Sensor Model (Nada)
Once we had a precomputed lookup table, we could take in particles from the motion model as well as the LIDAR observations, and use this data to find the likelihood of each motion model particle accurately denoting the robot's position on the map. From here, we were able to choose the higher likelihood particles and update our pose estimate based on those particles.
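A minimal sketch of this evaluation step follows. The function name is hypothetical, and we assume the ray-traced ground truth distances for each particle are computed elsewhere (e.g., by a ray casting routine against the map).

```python
import numpy as np

def evaluate_particles(observed, expected, table, step=0.1):
    """observed: (m,) downsampled LIDAR ranges.
    expected: (n, m) ray-traced distances from each of n particle poses.
    table: the precomputed (z_t*, z_t) lookup table from above."""
    last = table.shape[0] - 1
    obs_idx = np.clip(np.rint(observed / step).astype(int), 0, last)
    exp_idx = np.clip(np.rint(expected / step).astype(int), 0, last)

    # Per-beam probabilities for every particle; the product over beams
    # is taken as a sum of logs for numerical stability
    log_p = np.log(table[exp_idx, obs_idx])
    weights = np.exp(log_p.sum(axis=1))
    return weights / weights.sum()
```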
Particle Filter (Andrew)
The particle filter combines these two models to produce accurate estimates of the car's position relative to the map. It does so by initializing an array of particles which are randomly distributed around the car's initial position to account for uncertainty in this pose. It then applies the motion model to these particles for every odometry update it receives, which moves each particle to the position it would be in after driving that distance. This causes the particles to spread out, simulating the increasing uncertainty in the car's position. This spread is narrowed by the sensor model, which is called every fifth LIDAR scan that arrives. The sensor model uses a ray tracing algorithm combined with the probability table discussed above to determine how likely each particle is given the measured positions of the walls. Each particle is assigned a probability based on this output, and a new set of particles is drawn from this distribution to reduce the number of particles that have diverged too far from the true position. The robot's position is taken to be the mean of the particle positions, while its orientation is determined by taking the mean of circular quantities of the particle headings.
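A sketch of the resampling and pose estimate described above (names illustrative):

```python
import numpy as np

def resample_and_estimate(particles, weights):
    # Draw a new particle set (with replacement) from the weighted
    # distribution produced by the sensor model
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]

    # Position: plain mean of particle positions. Heading: mean of
    # circular quantities, recovered from averaged unit vectors via atan2
    x = particles[:, 0].mean()
    y = particles[:, 1].mean()
    theta = np.arctan2(np.sin(particles[:, 2]).mean(),
                       np.cos(particles[:, 2]).mean())
    return particles, (x, y, theta)
```

Taking the circular mean of the headings avoids the wrap-around problem: for example, the plain mean of -179° and +179° is 0°, while the circular mean correctly yields 180°.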
Experimental Evaluation (Andrew & Eric)
In Simulation
We first tested the particle filter on the simulated car. We created a copy of the car's odometry data with noise added and attempted to localize given this noise. The chosen noise included a random factor with mean 1 and standard deviation 0.01 multiplied onto the odometry vector, as well as additive noise with mean 0 and standard deviation 0.01. This appeared to recreate the noise effects we saw on the real car at our chosen driving speed of 1 m/s. We used the simulation to tune the robot's free parameters, such as the motion model noise and the sensor model evaluation frequency. The second parameter was of particular interest, as we found that it had a significant effect on the performance of the filter. To find the best value, we recorded a driven route through the map and compared the position output by the filter to the real position for a variety of update intervals i.
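A minimal sketch of this noise injection (the function name is ours; the distribution parameters are the ones listed above):

```python
import numpy as np

def add_odometry_noise(odom):
    # odom: the odometry vector, e.g. [dx, dy, dtheta]
    scale = np.random.normal(1.0, 0.01, size=odom.shape)   # multiplicative, N(1, 0.01)
    offset = np.random.normal(0.0, 0.01, size=odom.shape)  # additive, N(0, 0.01)
    return odom * scale + offset
```

The localization error for each tested update interval i is shown in the figures below.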
Figure 7: Simulated Localization with i = 1
Figure 8: Simulated Localization with i = 2
Figure 9: Simulated Localization with i = 5
Figure 10: Simulated Localization with i = 100
As shown by these graphs, as the time between successive updates from the sensor model increases, the maximum error increases and the minimum error decreases. This happens because the racecar relies on the motion model for a longer period of time, and the predicted location tends to drift away from the actual location. When the sensor model is then used to resample the particles, it has a wider range of options to choose from, allowing it to make a better prediction of where the car actually is. While this only slightly improves the results in the simulation, it does a fantastic job on the actual robot. The reason for this is that the racecar does not have Gaussian noise like we added to the simulation. The racecar tends to drift to the left in real life, and running the motion model for longer allows the particles to spread out a bit more before they are collapsed by the sensor model resampling.