Kalman Filter

In many real-world applications, such as tracking the position of a moving vehicle, we rely on models to predict the state of a system. However, these predictions are never perfect because of uncertainty and noise in the system. To improve the accuracy of our state estimates, we use a method called Kalman filtering.

The Kalman filter is an efficient recursive algorithm that estimates the state of a dynamic system from a series of incomplete and noisy measurements.

Modeling a Dynamic System

The continuous-time dynamics of a linear system can be written as:

\[\dot{\mathbf x} = \mathbf{Ax}\]

Because the model never captures the system perfectly, we add a process noise term $\mathbf w$:

\[\dot{\mathbf x} = \mathbf{Ax} + \mathbf w\]

Next, consider inputs into the system. We assume an input $\mathbf u$ and a linear input matrix $\mathbf B$ that converts $\mathbf u$ into its effect on the state:

\[\dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu} + \mathbf{w}\]

Where:

  • $\mathbf x$ is the state vector,
  • $\mathbf A$ is the system dynamics matrix,
  • $\mathbf B$ is the control (input) matrix,
  • $\mathbf u$ is the control input,
  • $\mathbf w$ is the process noise.

These equations form the foundation for modeling dynamic systems, allowing engineers to predict and analyze the system’s behavior under varying conditions and inputs.
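For example, for the 1D position-velocity model used later in this post, with acceleration treated as the input, one standard choice of matrices is:

\[\mathbf A = \begin{bmatrix}0 & 1\\0 & 0\end{bmatrix}, \qquad \mathbf B = \begin{bmatrix}0\\1\end{bmatrix}\]

so that $\dot x = v$ and $\dot v = u + w$: the input acts directly on the velocity, and the position simply integrates the velocity.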

State Transition Equation

While continuous-time dynamic models provide theoretical insights into how system states change over time, practical applications demand more specific and actionable models to predict and manage system behaviors at specific moments. This is where the introduction of the state transition equation becomes crucial.

The state transition equation describes how the state of a system evolves over discrete time steps:

\[\mathbf x_k = \mathbf{Fx}_{k-1} + \mathbf B_k\mathbf u_k\]

Where:

  • $\mathbf x_k$ is the state vector at time step $k$,
  • $\mathbf F$ is the state transition matrix,
  • $\mathbf B_k$ is the control input matrix at step $k$,
  • $\mathbf u_k$ is the control input at step $k$.

This equation provides a framework for modeling the temporal evolution of a dynamic system under the influence of external inputs or controls. By adjusting F and B, engineers can simulate and analyze how different inputs affect the system’s behavior over time.
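As a minimal sketch with illustrative values (not taken from the original post), one discrete step of this equation for the 1D position-velocity model can be computed directly:

import numpy as np

dt = 0.1                           # time step (illustrative value)
F = np.array([[1.0, dt],           # position gains velocity * dt over one step
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],       # an acceleration input affects position...
              [dt]])               # ...and velocity
x_prev = np.array([0.0, 1.0])      # state [position, velocity] at step k-1
u = np.array([0.2])                # commanded acceleration (illustrative)

x_next = F.dot(x_prev) + B.dot(u)  # x_k = F x_{k-1} + B u_k
print(x_next)                      # approximately [0.101, 1.02]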

Python Implementation of Kalman Filter

In this case, we use a simple 1D position-and-velocity example without external control inputs, so the system’s state is determined solely by its own dynamics and process noise:

\[\mathbf x_k = \mathbf{Fx}_{k-1}\]

Step 1 Define the offsets of each variable in the state vector

These define the order of the state variables in our state vector $\mathbf x$:

import numpy as np
# offsets of each variable in the state vector
iX = 0
iV = 1
NUMVARS = iV + 1  

Step 2 Initialize the class object instance:

class KF:
    def __init__(self, initial_x: float, initial_v: float, accel_variance: float) -> None:
        # mean of state GRV
        self._x = np.zeros(NUMVARS)
        self._x[iX] = initial_x 
        self._x[iV] = initial_v  
        self._accel_variance = accel_variance
        # covariance of state GRV
        self._P = np.eye(NUMVARS)

__init__ initializes the Kalman filter’s instance variables: the state mean vector _x (set to the initial position and velocity), the acceleration variance used as process noise, and the state covariance matrix _P, which starts as the identity.
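The driver script in the later sections also reads kf.mean and kf.cov; the class presumably exposes the state mean and covariance through read-only properties along these lines (a minimal sketch):

    @property
    def mean(self) -> np.ndarray:
        # current state estimate [position, velocity]
        return self._x

    @property
    def cov(self) -> np.ndarray:
        # current state covariance matrix
        return self._P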

Step 3 Prediction: state prediction and covariance prediction

def predict(self, dt: float) -> None:
    # x = F x
    # P = F P Ft + G Gt a
    F = np.eye(NUMVARS)
    F[iX, iV] = dt
    new_x = F.dot(self._x)

    G = np.zeros((NUMVARS, 1))
    G[iX] = 0.5 * dt**2
    G[iV] = dt
    new_P = F.dot(self._P).dot(F.T) + G.dot(G.T) * self._accel_variance

    self._P = new_P
    self._x = new_x

1. State prediction

F is the state transition matrix. We track position and velocity, so to describe how the state evolves over a time step dt, $\mathbf F$ is:

\[\begin{aligned} \mathbf F &= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\ &= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix} \end{aligned}\]

Applying this to $\mathbf x_k = \mathbf{Fx}_{k-1}$ gives the predicted state new_x:

\[\begin{aligned} \mathbf x_k &=\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix} \mathbf x_{k-1} \end{aligned}\]
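For example, with illustrative values $\Delta t = 0.1$ and $\mathbf x_{k-1} = (0,\ 1)^T$ (position 0, velocity 1), the predicted state is:

\[\mathbf x_k = \begin{bmatrix}1 & 0.1\\0 & 1\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0.1\\1\end{bmatrix}\]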

2. Covariance prediction: In addition to predicting the state, it is essential to predict the covariance, because the covariance matrix describes the uncertainty of the state estimate. We denote the predicted covariance matrix as $\mathbf P_k$. Two factors contribute to this prediction:

  1. Impact of State Prediction on Covariance:

    During the prediction phase, the current state covariance matrix is propagated through the state transition matrix F to obtain the predicted state covariance. This step accounts for how uncertainty in the current state carries over into the predicted state.

  2. Contribution of Process Noise:

    The process noise covariance matrix Q describes the unmodeled dynamic changes in the system (and, in general, the influence of control inputs). It directly affects the size and structure of the predicted state covariance matrix.

Based on these two factors, we predict the new state covariance matrix $\mathbf{P}$:

\[\mathbf{P}_k = F \mathbf{P}_{k-1} F^T + Q\]

Recall that Q describes the unmodeled dynamic changes in the system; in our model the only such effect is a random acceleration, so Q is built from the noise influence matrix G (defined below) and the acceleration variance $\sigma_a^2$:

\[\mathbf{Q} = G G^T \sigma^2_a\]


To derive G, consider an object moving with constant acceleration a: the position change over a time interval dt is \(x = x_0 + v_0 \cdot dt + \frac{1}{2} a \cdot dt^2\) and the velocity change is \(v = v_0 + a \cdot dt\).

So G[iX] = 0.5 * dt**2 and G[iV] = dt capture the influence of the acceleration noise on position and velocity over the time step dt, and the process noise influence matrix $G$ is:

\[G = \begin{pmatrix} 0.5 \cdot dt^2 \\ dt \end{pmatrix}\]
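Multiplying this out gives the process noise covariance for this model explicitly:

\[\mathbf{Q} = G G^T \sigma^2_a = \begin{pmatrix} \frac{1}{4} dt^4 & \frac{1}{2} dt^3 \\ \frac{1}{2} dt^3 & dt^2 \end{pmatrix} \sigma^2_a\]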

Using the G matrix, we can now write the full covariance prediction:

\[\mathbf{P}_k = F \mathbf{P}_{k-1} F^T + G G^T \sigma^2_a\]

where $\sigma^2_a$ is the variance of the acceleration noise.

Step 4 Update

The update step is crucial because it allows the Kalman filter to correct its predictions based on new measurements. Imagine you have a system (like a moving car) and you’re trying to estimate its position and speed. You use a model to predict where the car will be, but your prediction won’t be perfect due to uncertainties (like changes in speed or direction).

Every time you get a new measurement (like a GPS reading), you can use this new information to correct your estimate. This makes your estimation more accurate than just relying on predictions alone.

The update step of the Kalman filter utilizes linear algebra and probability theory to dynamically adjust the state estimate based on real-time measurement data. These mathematical formulas ensure that the filter optimally balances between the system dynamics and measurement uncertainties, providing accurate and reliable state estimation.

Thus, the update step involves correcting the predicted state estimate based on the new measurement. Here’s a detailed explanation of the update process:

  1. Calculate Measurement Residual:

    Calculate the measurement residual $y$, which is the difference between the actual measurement $z$ and the predicted measurement based on the current state estimate:

    \[y = z - H \mathbf{x}\]

    where:

    • $\mathbf{z}$ is the actual measurement.
    • $\mathbf{H}$ is the observation matrix that maps the state vector into the measurement space.
    • $\mathbf{x}$ is the predicted state vector.
  2. Calculate Residual Covariance:

    Compute the residual covariance $S$ , which combines the prediction uncertainty and measurement noise:

    \[S = H \mathbf{P} H^T + R\]

    where:

    • $\mathbf{P}$ is the predicted state covariance matrix.
    • $\mathbf{R}$ is the measurement noise covariance.

  3. Compute Kalman Gain:

    The Kalman gain $K$ determines how much the predicted state estimate is adjusted based on the measurement residual. It balances the uncertainty in the prediction against the uncertainty in the measurement: if the prediction is very uncertain but the measurement is very accurate, the gain will be large, meaning the measurement is trusted more (a small numeric illustration follows this list). The gain is computed as:

    \[K = \mathbf{P} H^T (H \mathbf{P} H^T + R)^{-1}\]
  4. Update State Estimate:

    Update the state estimate $\mathbf{x}$ using the Kalman gain and the measurement residual:

    \[\mathbf{x}^+ = \mathbf{x} + K y\]

    Adjust your previous state estimate by adding the product of the Kalman Gain and the residual. This step corrects the prediction by bringing it closer to the actual measurement.

  5. Update Covariance Matrix:

    Update the covariance matrix $\mathbf{P}$ to reflect the incorporation of the new measurement information:

    \[\mathbf{P}^+ = (I - K H) \mathbf{P}\]
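As a quick numeric illustration (made-up values): suppose the predicted covariance is $\mathbf{P} = I$ (the 2×2 identity), $H = \begin{pmatrix}1 & 0\end{pmatrix}$, and the measurement noise is $R = 0.01$. Then:

\[S = 1 + 0.01 = 1.01, \qquad K = \begin{pmatrix}1\\0\end{pmatrix}\frac{1}{1.01} \approx \begin{pmatrix}0.99\\0\end{pmatrix}\]

so the position estimate moves about 99% of the way toward the measurement, while the velocity estimate is left unchanged because its prior covariance with the position is zero here.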

Significance in Applications

The update step enables the Kalman filter to adaptively adjust the state estimate according to real-world measurement data, facilitating precise tracking and prediction of system states. This capability makes the Kalman filter invaluable in real-time applications such as navigation systems, robotics, and sensor data processing.

Implementation of Update in Python:

def update(self, meas_value: float, meas_variance: float) -> None:
    # y = z - H x
    # S = H P Ht + R
    # K = P Ht S^-1
    # x = x + K y
    # P = (I - K H) * P

    H = np.array([1, 0]).reshape((1, 2))

    z = np.array([meas_value])
    R = np.array([meas_variance])

    y = z - H.dot(self._x)
    S = H.dot(self._P).dot(H.T) + R

    K = self._P.dot(H.T).dot(np.linalg.inv(S))

    new_x = self._x + K.dot(y)
    new_P = (np.eye(2) - K.dot(H)).dot(self._P)

    self._P = new_P
    self._x = new_x
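Assuming the snippets above are assembled into the KF class in kf.py (and using the cov property sketched earlier), a quick sanity check with illustrative values shows the expected behaviour: the position variance grows after predict and shrinks again once a measurement is folded in.

from kf import KF

kf = KF(initial_x=0.0, initial_v=1.0, accel_variance=0.1)
print(kf.cov[0, 0])   # 1.0 -- initial position variance
kf.predict(dt=0.1)
print(kf.cov[0, 0])   # slightly above 1.0 after the prediction step
kf.update(meas_value=0.12, meas_variance=0.01)
print(kf.cov[0, 0])   # well below 1.0 after incorporating the measurement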

Using the KF

Now let’s test the Kalman filter implementation.

We create a main.py script that demonstrates how to use the Kalman filter to track the position and velocity of an object over time. To better observe the performance of the Kalman filter, we set up the plotting environment and initialize various parameters:

1. Import Libraries and Set Up the Plotting Environment

Import the necessary libraries and modules: numpy is used for numerical computations, matplotlib for plotting, and KF is our Kalman filter implementation.

import numpy as np
import matplotlib.pyplot as plt
from kf import KF 

2. Initialization and Parameter Setup

Initializing the real position and velocity of the object is necessary to simulate the actual state of the system we want to track. This provides a reference to compare against the filter’s estimates.

real_x = 0.0
meas_variance = 0.1 ** 2
real_v = 0.5

kf = KF(initial_x = 0.0, initial_v = 1.0, accel_variance = 0.1)

3. Simulation Parameters

Defining the noise in the measurements simulates real-world sensor inaccuracies. This helps in testing the filter’s ability to handle and correct noisy data.

DT = 0.1
NUM_STEPS = 1000
MEAS_EVERY_STEPS = 20

mus = []
covs = []
real_xs = []
real_vs = []

4. Simulation Loop

To emulate the dynamic behavior of the system over time and to observe how the Kalman filter performs in real-time tracking of the object’s state, we run a simulation loop:

for step in range(NUM_STEPS):
    if step > 500:
        real_v *= 0.9          # after step 500, let the true velocity decay

    covs.append(kf.cov)        # record the filter's covariance estimate
    mus.append(kf.mean)        # record the filter's state estimate

    real_x = real_x + DT * real_v   # propagate the true position

    kf.predict(dt = DT)
    if step != 0 and step % MEAS_EVERY_STEPS == 0:
        # feed the filter a noisy position measurement every MEAS_EVERY_STEPS steps
        kf.update(meas_value = real_x + np.random.randn() * np.sqrt(meas_variance), 
                  meas_variance = meas_variance)

    real_xs.append(real_x)
    real_vs.append(real_v)

5. Plotting

To visually assess the Kalman Filter’s performance in estimating dynamic system states from noisy measurements, plotting is indispensable. It provides a clear comparison between estimated and actual states, visualizes uncertainties, aids in performance evaluation, and facilitates debugging and optimization of the filter implementation.

Apart from the estimated and actual states, we also plot the Confidence Interval of the estimated position/velocity (mean ± 2 standard deviations), which helps us understand the range within which the true velocity is likely to lie with a high degree of confidence. It reflects the uncertainty in our velocity estimation due to measurement variations and model assumptions. Plotting this interval alongside the estimated velocity (mu[1]) allows us to visualize the accuracy and reliability of our velocity predictions throughout the simulation.

plt.subplot(2, 1, 1)
plt.title('Position')
plt.plot([mu[0] for mu in mus], 'g')
plt.plot(real_xs, 'b')
plt.plot([mu[0] - 2*np.sqrt(cov[0, 0]) for mu, cov in zip(mus, covs)], 'r--')
plt.plot([mu[0] + 2*np.sqrt(cov[0, 0]) for mu, cov in zip(mus, covs)], 'r--')

plt.subplot(2, 1, 2)
plt.title('Velocity')
plt.plot([mu[1] for mu in mus], 'g')
plt.plot(real_vs, 'b')
plt.plot([mu[1] - 2*np.sqrt(cov[1, 1]) for mu, cov in zip(mus, covs)], 'r--')
plt.plot([mu[1] + 2*np.sqrt(cov[1, 1]) for mu, cov in zip(mus, covs)], 'r--')
plt.show()

(Figure: estimated position and velocity in green against the true values in blue, with ±2 standard deviation confidence bands shown as red dashed lines.)

Conclusion

The Kalman filter demonstrates strong performance in tracking the position of the object, with predictions closely matching the true values. The confidence interval for the position estimate remains narrow, indicating high confidence in these predictions, except for fluctuations near zero where variability increases. For velocity, the Kalman filter tracks changes accurately overall, though some deviations appear during rapid changes. The confidence interval for the velocity estimate also stays fairly narrow, suggesting low uncertainty in the velocity predictions; as with position, the fluctuations are more pronounced around zero. Overall, the Kalman filter effectively reduces noise and provides reliable state estimation for dynamic systems.
