A very short publication introducing the IMM filter. It describes a system that switches among multiple models, each carrying its own state. The model transition is an ergodic Markov chain with transition probability matrix $\Pi$, and the state transition within each model is a linear dynamic system. The state transitions and model transitions are independent.

# Kalman filter model

Discrete-time state representation of a linear system:

$$X_{k+1} = \Phi_k X_k + w_k$$

where $X_k$ is the state vector; $\Phi_k$ is the state transition matrix from $k$ to $k+1$; and $w_k$ is the process noise, assumed to be white Gaussian. Observations are assumed to be linear in the state:

$$y_k = H_k X_k + v_k$$

where $H_k$ is the matrix relating the state to the observation; $v_k$ is the observation noise, assumed to be white Gaussian as well and uncorrelated with $w_k$. The Kalman filter is the iterative process that provides the minimum mean squared error solution:

$$\begin{aligned}
\tilde{X}_k &= \Phi_{k-1}\hat{X}_{k-1} \\
\tilde{P}_k &= \Phi_{k-1}\hat{P}_{k-1}\Phi_{k-1}^T + Q_{k-1} \\
K_k &= \tilde{P}_k H_k^T\left(H_k \tilde{P}_k H_k^T + R_k\right)^{-1} \\
\hat{X}_k &= \tilde{X}_k + K_k\left(y_k - H_k \tilde{X}_k\right) \\
\hat{P}_k &= \left(I - K_k H_k\right)\tilde{P}_k
\end{aligned}$$

where

• $\tilde{X}, \hat{X}$: predicted and filtered quantities respectively
• $X$: state estimate
• $P$: covariance matrix (of error of $X$)
• $\Phi$: discrete time transition matrix
• $Q$: process noise covariance matrix
• $R$: observation noise covariance matrix
• $K$: Kalman gain
• $I$: identity matrix
• $y_k$: observation used to update the state estimate
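The recursion above can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name `kalman_step` and the constant-velocity example matrices are assumptions, not from the source:

```python
import numpy as np

def kalman_step(x_hat, P_hat, y, Phi, H, Q, R):
    """One Kalman predict + update cycle (illustrative sketch)."""
    # Prediction: propagate the previous filtered estimate and covariance.
    x_tilde = Phi @ x_hat
    P_tilde = Phi @ P_hat @ Phi.T + Q
    # Kalman gain from the innovation covariance H P~ H^T + R.
    S = H @ P_tilde @ H.T + R
    K = P_tilde @ H.T @ np.linalg.inv(S)
    # Update: correct the prediction with the residual y - H x~.
    x_new = x_tilde + K @ (y - H @ x_tilde)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_tilde
    return x_new, P_new

# Hypothetical constant-velocity example: state = [position, velocity],
# unit time step, position-only measurement.
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])
x, P = kalman_step(np.zeros(2), np.eye(2), np.array([1.0]), Phi, H, Q, R)
```

In an IMM, one such recursion runs per model; only the inputs ($\Phi$, $Q$, etc.) differ between the parallel filters.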

# Multiple filter model

The interacting multiple model (IMM) algorithm combines state hypotheses from multiple filter models to obtain a better state estimate for targets with changing dynamics.

Example using two models: the state estimates of each model, $\tilde{X}^1, \tilde{X}^2$, and the model probability $\tilde{\mu}$ are input to the state update. The state update produces $\hat{X}^1$ and $\hat{X}^2$ together, and they are separately predicted forward to $\tilde{X}^1$ and $\tilde{X}^2$ for the next step. At the same time, the likelihood of each model is used to update the model probability $\tilde{\mu}$.

## IMM Algorithm

Model state estimates and covariances for model $j$ at time $k$ are mixed from the estimates of all models:

$$\hat{X}^{0j} = \frac{1}{\bar{\psi}^j}\sum_i \mu^i p^{ij}\,\hat{X}^i, \qquad
\hat{P}^{0j} = \frac{1}{\bar{\psi}^j}\sum_i \mu^i p^{ij}\left[\hat{P}^i + \left(\hat{X}^i - \hat{X}^{0j}\right)\left(\hat{X}^i - \hat{X}^{0j}\right)^T\right]$$

with

$$\bar{\psi}^j = \sum_i \mu^i p^{ij}$$

Here, $\mu^i$ is the probability that the system is in model $i$; $p^{ij}$ is the a priori probability for switching from model $i$ to model $j$; $\bar{\psi}^j$ a normalization constant; $\hat{X}^{0j}$ and $\hat{P}^{0j}$ are mixed state estimate and covariance for each filter model.
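As a concrete sketch of this mixing step, here is a hypothetical two-model scalar example in NumPy; all numbers are illustrative, not from the source:

```python
import numpy as np

mu = np.array([0.7, 0.3])          # model probabilities mu^i
p = np.array([[0.9, 0.1],          # switching probabilities p^{ij} (row i -> column j)
              [0.2, 0.8]])
x_hat = np.array([1.0, 2.0])       # scalar state estimates X^i per model
P_hat = np.array([0.5, 1.0])       # scalar covariances P^i per model

# Normalization constants psi_bar^j = sum_i mu^i p^{ij}.
psi_bar = mu @ p
# Mixing weights mu^i p^{ij} / psi_bar^j, then mixed estimates X^{0j}.
w = (mu[:, None] * p) / psi_bar
x0 = w.T @ x_hat
# Mixed covariances P^{0j} include the spread-of-the-means term.
P0 = np.array([(w[:, j] * (P_hat + (x_hat - x0[j]) ** 2)).sum() for j in range(2)])
```

Note that each mixed estimate $\hat{X}^{0j}$ is pulled toward the estimates of the models the chain is likely to have switched from, and the covariance grows when those estimates disagree.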

Assume $m_0$ is the observation vector for the current update and $\tilde{m}^j$ is the predicted observation computed from the predicted track state for filter model $j$. Then $Z^j = m_0 - \tilde{m}^j$ is the innovation. The covariance matrix of $Z^j$ is $\tilde{S}^j = H^j\tilde{P}^{0j}(H^j)^T+R$. The likelihood that the system is in model $j$ is given by

$$\Lambda^j = \frac{1}{\sqrt{\left|2\pi\tilde{S}^j\right|}}\exp\left[-\frac{1}{2}\left(Z^j\right)^T\left(\tilde{S}^j\right)^{-1}Z^j\right]$$

The model probabilities after the update are $\hat{\mu}^j = \frac{1}{c}\Lambda^j\bar{c}^j$, with $\bar{c}^j$ a normalization vector that maintains a total probability of 1 and $c$ a normalization constant.

Finally, combine the state estimates and covariances:

$$\hat{X} = \sum_j \hat{\mu}^j\,\hat{X}^j, \qquad
\hat{P} = \sum_j \hat{\mu}^j\left[\hat{P}^j + \left(\hat{X}^j - \hat{X}\right)\left(\hat{X}^j - \hat{X}\right)^T\right]$$
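A minimal sketch of the probability update and combination steps, assuming the per-model innovations and innovation covariances have already been computed; the function and argument names are illustrative, not from the source:

```python
import numpy as np

def imm_combine(x_hats, P_hats, Z, S, c_bar):
    """Update model probabilities and combine per-model estimates.
    x_hats, P_hats: lists of per-model filtered estimates/covariances.
    Z, S: lists of per-model innovations and innovation covariances.
    c_bar: normalization vector of predicted model probabilities."""
    # Gaussian likelihood Lambda^j of each model's innovation.
    lam = np.array([
        np.exp(-0.5 * z @ np.linalg.solve(s, z)) / np.sqrt(np.linalg.det(2 * np.pi * s))
        for z, s in zip(Z, S)
    ])
    mu = lam * c_bar
    mu /= mu.sum()                       # hat-mu^j, normalized to total probability 1
    # Combined estimate; combined covariance adds the spread-of-the-means term.
    x = sum(m * xh for m, xh in zip(mu, x_hats))
    P = sum(m * (Ph + np.outer(xh - x, xh - x))
            for m, xh, Ph in zip(mu, x_hats, P_hats))
    return mu, x, P
```

The spread-of-the-means term makes $\hat{P}$ larger than any single model's covariance when the models disagree, which is the honest thing to report.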

# Notes

A Swedish lecture note describes the above with less matrix notation:

Assume the Markov system has $N_r$ models, and the current model of the system is denoted by $r_k \in \{1, 2, \dots, N_r\}$. The Markov transition probability matrix is

$$\Pi_{ij} = P\left(r_{k+1} = j \mid r_k = i\right)$$

Each model has a different dynamic:

$$x_{k+1} = F^i x_k + w_k^i, \qquad y_k = H^i x_k + v_k^i \qquad \text{when } r_k = i$$

with $w_k^i \sim \mathcal{N}(0, Q^i)$ and $v_k^i \sim \mathcal{N}(0, R^i)$.
Given measurements $y_{0:k}$, we can find the posterior distribution of the base state $x_k$ and the posterior model probabilities $\mu_k^i$:

$$p\left(x_k \mid y_{0:k}\right) \approx \sum_{i=1}^{N_r} \mu_k^i\,\mathcal{N}\left(x_k;\; \hat{x}_k^i,\; P_k^i\right), \qquad \mu_k^i = P\left(r_k = i \mid y_{0:k}\right)$$

The IMM algorithm is as follows:

Suppose we have the statistics of the historical state estimates, the covariance matrices of the estimates, and the model probability for each model, up to time $k-1$:

$$\left\{\hat{x}_{k-1}^i,\; P_{k-1}^i,\; \mu_{k-1}^i\right\}_{i=1}^{N_r}$$

First, the mixing. We compute the mixing probabilities for all model transitions, i.e. the probability that the system was in model $j$ given that it is now in model $i$: the prior model probability multiplied by the transition probability, normalized:

$$\mu_{k-1}^{j|i} = \frac{\Pi_{ji}\,\mu_{k-1}^j}{\bar{c}^i}, \qquad \bar{c}^i = \sum_{l=1}^{N_r} \Pi_{li}\,\mu_{k-1}^l$$

And the mixed state estimate is, similarly, the average of all state estimates weighted by the mixing probabilities into this model; likewise for the covariances:

$$\hat{x}_{k-1}^{0i} = \sum_j \mu_{k-1}^{j|i}\,\hat{x}_{k-1}^j, \qquad
P_{k-1}^{0i} = \sum_j \mu_{k-1}^{j|i}\left[P_{k-1}^j + \left(\hat{x}_{k-1}^j - \hat{x}_{k-1}^{0i}\right)\left(\hat{x}_{k-1}^j - \hat{x}_{k-1}^{0i}\right)^T\right]$$

Then the model-matched prediction update. For each model $i$, calculate the predicted state estimate and covariance from the mixed estimates:

$$\hat{x}_{k|k-1}^i = F^i\,\hat{x}_{k-1}^{0i}, \qquad
P_{k|k-1}^i = F^i P_{k-1}^{0i}\left(F^i\right)^T + Q^i$$

Afterwards, the model-matched measurement update. For each model $i$, calculate the Kalman gain and the updated estimate and covariance:

$$\begin{aligned}
S_k^i &= H^i P_{k|k-1}^i \left(H^i\right)^T + R^i \\
K_k^i &= P_{k|k-1}^i \left(H^i\right)^T \left(S_k^i\right)^{-1} \\
\hat{x}_k^i &= \hat{x}_{k|k-1}^i + K_k^i\left(y_k - H^i\,\hat{x}_{k|k-1}^i\right) \\
P_k^i &= P_{k|k-1}^i - K_k^i S_k^i \left(K_k^i\right)^T
\end{aligned}$$

We update the model probabilities as well, using the Gaussian likelihood of each model's innovation:

$$\mu_k^i = \frac{\Lambda_k^i\,\bar{c}^i}{\sum_{l}\Lambda_k^l\,\bar{c}^l}, \qquad
\Lambda_k^i = \mathcal{N}\left(y_k;\; H^i\,\hat{x}_{k|k-1}^i,\; S_k^i\right), \qquad
\bar{c}^i = \sum_{j}\Pi_{ji}\,\mu_{k-1}^j$$

Finally, we can find the overall output estimate. This is not used in the iterative process, but serves as the estimate of the system state at time $k$:

$$\hat{x}_k = \sum_i \mu_k^i\,\hat{x}_k^i, \qquad
P_k = \sum_i \mu_k^i\left[P_k^i + \left(\hat{x}_k^i - \hat{x}_k\right)\left(\hat{x}_k^i - \hat{x}_k\right)^T\right]$$

We have now advanced a single step and hold the historical estimates up to time $k$:

$$\left\{\hat{x}_k^i,\; P_k^i,\; \mu_k^i\right\}_{i=1}^{N_r}$$
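The whole single-step recursion above (mixing, per-model prediction, per-model measurement update, probability update, and output combination) can be sketched as one function. This is a NumPy illustration under the linear-Gaussian assumptions; the names are not from the lecture note:

```python
import numpy as np

def imm_step(x, P, mu, Pi, F, H, Q, R, y):
    """One IMM recursion. x, P, F, H, Q, R are lists indexed by model;
    mu is the model-probability vector, Pi the transition matrix, y the measurement."""
    Nr = len(mu)
    # 1. Mixing: weights mu^{j|i} proportional to Pi[j, i] * mu[j].
    c_bar = Pi.T @ mu
    w = (Pi * mu[:, None]) / c_bar            # w[j, i] = mu^{j|i}
    x0 = [sum(w[j, i] * x[j] for j in range(Nr)) for i in range(Nr)]
    P0 = [sum(w[j, i] * (P[j] + np.outer(x[j] - x0[i], x[j] - x0[i]))
              for j in range(Nr)) for i in range(Nr)]
    # 2-3. Model-matched Kalman prediction and measurement update.
    x_new, P_new, lam = [], [], np.empty(Nr)
    for i in range(Nr):
        xp = F[i] @ x0[i]
        Pp = F[i] @ P0[i] @ F[i].T + Q[i]
        S = H[i] @ Pp @ H[i].T + R[i]
        K = Pp @ H[i].T @ np.linalg.inv(S)
        z = y - H[i] @ xp
        x_new.append(xp + K @ z)
        P_new.append(Pp - K @ S @ K.T)
        lam[i] = np.exp(-0.5 * z @ np.linalg.solve(S, z)) / np.sqrt(np.linalg.det(2 * np.pi * S))
    # 4. Model probabilities and combined output.
    mu_new = lam * c_bar
    mu_new /= mu_new.sum()
    x_out = sum(m * xi for m, xi in zip(mu_new, x_new))
    P_out = sum(m * (Pc + np.outer(xi - x_out, xi - x_out))
                for m, xi, Pc in zip(mu_new, x_new, P_new))
    return x_new, P_new, mu_new, x_out, P_out
```

Only `x_new`, `P_new`, and `mu_new` feed the next iteration; `x_out` and `P_out` are the reported output, as noted above.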

## Bibliographic data

@article{genovese2001imm,
title = "Interacting Multiple Model Algorithm for Accurate State Estimation of Maneuvering Targets",
author = "Anthony F. Genovese",
year = "2001",
journal = "Johns Hopkins APL Technical Digest",
volume = "22",
number = "4",
pages = "614--623",
}