The Kalman Filter: Continuous State Tracking

Unlike Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs), which typically track discrete states (e.g., "Raining" vs. "Not Raining"), a Linear Dynamical System using a Kalman Filter tracks continuous variables (e.g., position, velocity, temperature).

The Kalman Filter represents uncertainty using Gaussian (normal) distributions, defined by a mean (the most likely estimate, x) and a variance/covariance (the uncertainty, P).

The filter operates in a continuous two-step loop:

1. The Predict Step (Time Update)

The filter uses the laws of physics or a known mathematical model to guess where the system will be next. Because no model is perfect, this step injects Process Noise (Q), which always increases our uncertainty (P).

2. The Update Step (Measurement Update)

The system takes a reading from a sensor. Sensors are inherently flawed, so this measurement has its own Measurement Noise (R). The filter calculates the Kalman Gain (K) to figure out how much to trust the sensor versus its own prediction.

Note on Tuning: As mentioned in the lecture notes, if Q is high, the filter trusts the sensors more and becomes "jumpy." If R is high, it trusts its own predictions more and becomes "sluggish."
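The tuning behavior is easy to see in code. Below is a minimal scalar sketch of the predict/update loop (assuming a static motion model, with Q, R, and the sensor readings chosen purely for illustration). Running the same alternating readings with a high Q produces a "jumpy" filter that chases the sensor, while a high R produces a "sluggish" one that barely moves:

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Run a scalar Kalman filter over a list of sensor readings.

    Assumes a static motion model: the predict step only inflates
    the uncertainty p by the process noise q.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: no motion model here, but process noise grows uncertainty.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

readings = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
jumpy    = kalman_1d(readings, q=10.0, r=0.1)   # high Q: chases the sensor
sluggish = kalman_1d(readings, q=0.01, r=10.0)  # high R: barely moves
```

With `q=10.0` the estimates swing almost all the way to each new reading; with `r=10.0` each update nudges the estimate by only a few percent of the innovation.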


A 1D Numerical Example

Imagine a robot moving in a straight line. We want to track its position along an x-axis.

Time Step 1 (t=1):

Step 1: Predict

The robot was at position 0 and moves at 1 m/s, so after one second we predict it is at 1.0. We add the process noise Q to our uncertainty P.

Step 2: Measure

The robot's GPS sensor takes a reading. Let's say it reads a slightly inaccurate value of 1.2.

Step 3: Update (Calculate Kalman Gain)

We calculate K to weigh our prediction (1.0) against the measurement (1.2).

Step 4: Update (Final State and Uncertainty)

Now we calculate the final, optimized position estimate and update our confidence.

The Result: The filter cleverly blended the prediction (1.0) and the sensor reading (1.2) to arrive at 1.1375. Crucially, as highlighted in the lecture snippet, observe how the uncertainty P dropped dramatically from 1.0 down to 0.34. Even though moving forward added process noise, blending the two independent sources of information significantly increased the filter's overall confidence.
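The full cycle can be reproduced in a few lines. The lecture states the results (estimate 1.1375, P dropping from 1.0 to about 0.34) but not the noise values, so Q = 0.1 and R = 0.5 below are assumptions chosen to be consistent with those stated numbers:

```python
# One predict/update cycle for the 1D robot example.
# Q = 0.1 and R = 0.5 are assumed values consistent with the stated results.

x, p = 0.0, 1.0          # prior position estimate and its variance
velocity, dt = 1.0, 1.0  # 1 m/s motion over one second
q, r = 0.1, 0.5          # assumed process and measurement noise

# Step 1: Predict -- move the state forward; uncertainty grows by Q.
x_pred = x + velocity * dt   # 1.0
p_pred = p + q               # 1.1

# Step 2: Measure -- the GPS reads a slightly inaccurate 1.2.
z = 1.2

# Step 3: Kalman gain -- how much to trust the sensor vs. the prediction.
k = p_pred / (p_pred + r)    # 0.6875

# Step 4: Update -- blend the two estimates and shrink the uncertainty.
x_new = x_pred + k * (z - x_pred)   # 1.1375
p_new = (1 - k) * p_pred            # 0.34375, reported as ~0.34
```

Note that the posterior variance (about 0.34) is smaller than either the predicted variance (1.1) or the measurement variance (0.5): combining two independent sources always tightens the estimate.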


1. The Intuition: Overlapping Certainty

The lecture begins by visually explaining why the Kalman Filter works using Probability Density Functions (Gaussian curves).

2. The Scalar Kalman Filter & The Gain (K)

The presentation mathematically dissects the 1D (scalar) version of the filter to explain the behavior of the Kalman Gain (KG or K).

The formula for the gain is defined as the ratio of the Estimate Error (E_est) to the total error (Estimate Error + Measurement Error, E_mes):

K = E_est / (E_est + E_mes) = 1 / (1 + E_mes / E_est)

This creates two important extreme bounds for how the filter behaves: as the measurement error approaches zero, K approaches 1 and the update trusts the sensor completely; as the estimate error approaches zero, K approaches 0 and the update ignores the sensor in favor of the prediction.
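These two bounds fall directly out of the ratio; a quick sketch (with illustrative error values):

```python
def kalman_gain(e_est, e_mes):
    """Scalar Kalman gain: the fraction of the innovation we accept."""
    return e_est / (e_est + e_mes)

# A near-perfect sensor (measurement error -> 0) drives K toward 1:
# the filter follows the measurement.
print(kalman_gain(1.0, 0.001))   # close to 1

# A near-perfect prediction (estimate error -> 0) drives K toward 0:
# the filter ignores the measurement.
print(kalman_gain(0.001, 1.0))   # close to 0
```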

3. The Matrix Kalman Filter

Real-world robotics rarely operate in 1D. A robot needs to track position and velocity across X and Y axes simultaneously. To do this, the scalar equations are upgraded to matrices.

The Prediction Step:

x_pred = A x + B u            (project the state forward through the model)
P_pred = A P Aᵀ + Q           (project the uncertainty forward, adding process noise)

The Update Step:

K = P_pred Hᵀ (H P_pred Hᵀ + R)⁻¹    (Kalman gain; H maps state to measurement space)
x = x_pred + K (z − H x_pred)         (blend the prediction with measurement z)
P = (I − K H) P_pred                  (shrink the uncertainty)
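The matrix form of the standard predict and update equations translates almost line-for-line into NumPy. A minimal sketch (the matrices A, B, H, Q, R are supplied by the caller; the position/velocity example below uses illustrative values):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Project the state and covariance forward through the motion model."""
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    return x, P

def kf_update(x, P, H, z, R):
    """Blend the prediction with a measurement z via the Kalman gain."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

A typical call pattern is one `kf_predict` followed by one `kf_update` per time step, carrying `x` and `P` between iterations.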

4. Designing a Filter using Kinematics

The lecture concludes by showing how to build the A and B matrices using standard Newtonian physics.

If you are tracking 1D position and velocity, the state vector is x = [x, ẋ]ᵀ. Using the kinematic equations x = x₀ + v t + ½ a t² and v = v₀ + a t (treating the acceleration a as the control input), the matrices are derived as:

A = | 1  Δt |        B = | ½Δt² |
    | 0   1 |            |  Δt  |

By plugging these physically derived matrices into the Kalman equations, the algorithm can accurately track a moving object over time.
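As a sanity check, stepping these kinematically derived matrices forward should reproduce the textbook equations of motion exactly. A short sketch (the time step and acceleration values are assumptions for illustration):

```python
import numpy as np

dt = 0.1  # assumed time step (s)

# State transition and control matrices from x = x0 + v*dt + 0.5*a*dt^2
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])

# Constant acceleration of 2 m/s^2, starting from rest at the origin.
x = np.array([[0.0],   # position (m)
              [0.0]])  # velocity (m/s)
a = np.array([[2.0]])

for _ in range(10):    # simulate 1 second of motion
    x = A @ x + B @ a

# Kinematics predicts x = 0.5 * 2 * 1^2 = 1.0 m and v = 2 m/s.
print(x[0, 0], x[1, 0])
```

Because constant acceleration is integrated exactly by this discrete model, the loop lands precisely on the analytic answer, which is why these A and B matrices are the standard choice for tracking problems.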