This lecture shifts from abstract learning concepts into applied robotics, specifically how a robot figures out where it is in the world.

Outline

The Problem of Uncertainty
Odometric Error Model
Probabilistic Map-Based Localization
The Sense-Move Cycle & Numerical Examples
Algorithm Summary

Part 1: The Problem of Uncertainty

The core theme of the lecture is summarized by a quote from G.K. Chesterton: "There is only one thing certain and that is that nothing is certain." A robot can never know its exact location because it constantly deals with two types of noise: sensor noise (its readings of the outside world are imperfect) and effector noise (its actuators never execute a commanded motion exactly).

See more on the types of noise here

Part 2: Odometric Error Model (not required)

When a robot moves, it uses "odometry" to estimate its position from internal sensors (e.g., counting how many times its wheels have turned). The pose is tracked as x and y coordinates, along with an orientation angle θ.

However, because of the effector noise mentioned above, small errors in the rotation of the right wheel (Δs_r) and left wheel (Δs_l) cause the robot's calculated position to drift from its actual position over time. The mathematical error model calculates this drift by factoring in the distance traveled (Δs) and the angular error (θ_error).
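
Below is a minimal sketch of the standard differential-drive odometry update these quantities plug into. The wheelbase parameter b and all variable names are assumptions for illustration; the lecture only names Δs_r, Δs_l, Δs, and θ.

```python
import math

def odometry_update(x, y, theta, ds_r, ds_l, b):
    """One differential-drive odometry step.

    ds_r, ds_l: distances rolled by the right/left wheel (from encoders).
    b: distance between the two wheels (wheelbase) -- an assumed parameter.
    """
    ds = (ds_r + ds_l) / 2.0        # distance traveled by the robot's center
    dtheta = (ds_r - ds_l) / b      # change in orientation
    # Update the pose, evaluating the heading at the midpoint of the turn.
    x_new = x + ds * math.cos(theta + dtheta / 2.0)
    y_new = y + ds * math.sin(theta + dtheta / 2.0)
    return x_new, y_new, theta + dtheta
```

Because ds_r and ds_l each carry a small random error, repeatedly applying this update makes the estimated pose drift further from the truth, which is exactly why Part 3 abandons pure deduction.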

Part 3: Probabilistic Map-Based Localization

Because odometry errors accumulate endlessly, pure deduction fails. Instead, the robot uses probabilistic robotics: it calculates the probability that it is in a given configuration. It combines data from two sources: its proprioceptive (internal) sensors, which predict how it has moved, and its exteroceptive (external) sensors, whose readings are matched against a map of the environment.

To process this, the lecture introduces the Markov localization technique. This method allows the robot to track multiple, completely disparate possible locations simultaneously.

To do this, the robot's continuous world must be chopped up into a discrete map. This is usually done using either a Topological Graph (nodes and connecting lines, like a subway map) or a Geometric Grid (a 2D array of square cells). The finer the grid resolution, the more memory the robot requires.
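
The two representations might look like this in code; the node names and map size are invented for illustration, not taken from the lecture.

```python
# Topological graph: nodes are distinctive places, edges are traversable paths.
topological_map = {
    "hallway": ["office", "kitchen"],
    "office": ["hallway"],
    "kitchen": ["hallway"],
}

# Geometric grid: a 2D array of square cells (True = occupied, False = free).
# Halving the cell size quadruples the cell count, which is the
# resolution-versus-memory trade-off mentioned above.
grid_map = [[False] * 4 for _ in range(4)]
```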

Part 4: The Sense-Move Cycle & Numerical Examples

The robot navigates by continuously looping a perception rule based on Bayes' Theorem: P(B|A) = P(A|B)·P(B) / P(A). Every time the robot takes an action, the probabilities shift.
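
As a quick numerical check of the formula, here is a one-function sketch that expands P(A) by total probability. The 0.5 prior is an assumption; the 0.6/0.2 likelihoods preview the door example below.

```python
def posterior(prior, likelihood, likelihood_other):
    """P(B|A) = P(A|B) * P(B) / P(A), with P(A) expanded by total probability."""
    evidence = likelihood * prior + likelihood_other * (1 - prior)
    return likelihood * prior / evidence

# A 0.5 prior that the door is red, with likelihoods 0.6 / 0.2 of
# sensing "red", yields a 0.75 posterior that the door really is red.
print(posterior(0.5, 0.6, 0.2))   # 0.75
```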

The Colored Door Example: Imagine a robot walking past a row of green and red doors. Its color sensor is imperfect: it senses the correct color with likelihood 0.6 and the wrong color with likelihood 0.2 (these act as unnormalized weights, which is why a normalization step follows).

  1. If the robot senses "red", it updates its belief for every single door: it multiplies its prior belief by the likelihood of seeing red (0.6 for actual red doors, 0.2 for green doors).

  2. It then normalizes the numbers (dividing by the total sum of all probabilities, e.g., 0.36) so that the total probability across all doors equals exactly 1 (100%).

  3. As the robot moves 1 or 2 cells to the right and takes new color readings, the math dynamically shifts the highest probability mass onto the robot's true location (a minimal sketch of this sense-move loop follows the list).
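
The following sketch implements the sense and move steps as a 1D histogram filter. The five-door layout and the cyclic, noise-free motion are assumptions for illustration; the 0.6/0.2 likelihoods and the 0.36 normalizer come from the lecture's numbers.

```python
def sense(belief, world, reading, p_hit=0.6, p_miss=0.2):
    """Correction: weight each cell's prior by the likelihood of the reading."""
    weighted = [p * (p_hit if color == reading else p_miss)
                for p, color in zip(belief, world)]
    total = sum(weighted)                 # 0.36 for the first "red" reading below
    return [w / total for w in weighted]  # normalize so the belief sums to 1

def move(belief, step):
    """Prediction: shift the belief `step` cells to the right (exact, cyclic)."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

world = ["green", "red", "red", "green", "green"]  # assumed door layout
belief = [1 / len(world)] * len(world)             # uniform prior: location unknown
belief = sense(belief, world, "red")               # first "red" reading
belief = move(belief, 1)                           # robot moves one cell right
belief = sense(belief, world, "red")               # second reading sharpens the peak
```

After the second reading, the probability mass piles up on the one cell that is consistent with both observations and the motion in between.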

The 2D Grid Example: The same logic applies to a 2D map. If a robot is dropped into a 16-cell (4x4) room, its initial state is totally unknown, meaning every single cell has a 1/16 probability of containing the robot. As the robot moves and senses walls, it overlays probability matrices (Prediction, Correction, and Correlation steps) to filter out impossible cells until it localizes itself, as sketched below.
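
Here is a minimal version of one such correction step on the 4x4 grid. The noise-free wall sensor is an assumption to keep the masking visible; the lecture's actual matrices also include the prediction and correlation steps.

```python
import numpy as np

belief = np.full((4, 4), 1 / 16)   # unknown start: every cell equally likely

# Correction: suppose the robot senses a wall directly to its north.
# Only cells along the top row are consistent with that reading.
consistent = np.zeros((4, 4))
consistent[0, :] = 1.0             # cells touching the north wall
belief *= consistent               # impossible cells drop to probability 0
belief /= belief.sum()             # renormalize: surviving cells hold 1/4 each
```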

See the solutions to the numerical examples here

Part 5: Algorithm Summary

The entire localization process runs endlessly in a simple pseudo-code loop, summarized in four steps (a Python sketch follows the list):

  1. Prediction Step: The robot updates its belief state P(x) using its internal proprioceptive sensors (guessing where it moved).

  2. Correction Step: The robot takes a reading with its exteroceptive sensors to calculate a likelihood for each possible state, such as p(x|red).

  3. Update Step: The robot merges the Prediction and Correction data by multiplying them together.

  4. Normalization Step: The robot divides the raw numbers by the total sum so that the system remains a valid probability distribution.
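
To close, here is a minimal Python sketch of one pass through that loop over a discrete set of states. The motion_model and sensor_model callables are assumed interfaces, not something the lecture defines.

```python
def localization_step(belief, motion, reading, motion_model, sensor_model):
    """One pass of the endless sense-move loop summarized above.

    belief:       dict mapping each state x to P(x).
    motion_model(x, x_prev, motion): probability of reaching x from x_prev.
    sensor_model(reading, x):        likelihood of the reading in state x.
    """
    states = list(belief)
    # 1. Prediction: push the belief through the motion model (proprioception).
    predicted = {x: sum(motion_model(x, xp, motion) * belief[xp] for xp in states)
                 for x in states}
    # 2 & 3. Correction and Update: weight each predicted state by the
    # likelihood of the exteroceptive reading, merging by multiplication.
    merged = {x: sensor_model(reading, x) * predicted[x] for x in states}
    # 4. Normalization: divide by the total so the result is a distribution.
    total = sum(merged.values())
    return {x: p / total for x, p in merged.items()}
```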