# Kalman filtering

inferno82    152
The 1D Kalman filter is actually quite simple. You begin with your state vector

[x, v]

where x is the position you are estimating and v is the velocity (which can point in either direction).

The filter is broken into two stages: the prediction and the update. The prediction stage gives the a priori estimate based on the previous iteration. For a constant-velocity model the state estimate is computed as

x_k = F * x_k-1

where

F = [1 deltaT; 0 1]

is a 2x2 matrix (written row by row). Process noise enters as an unknown, zero-mean acceleration, which maps into the state through G = [0.5 * deltaT^2; deltaT]; sigma is the standard deviation of that acceleration noise. The covariance matrix is predicted as

p_k = F * p_k-1 * FT + Q

where FT is the transpose of F and Q = G * GT * sigma^2.
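As a sketch of the prediction stage in Python with NumPy, assuming the usual 2x2 constant-velocity form (the time step dt and the acceleration-noise sigma_a are placeholder values):

```python
import numpy as np

dt = 0.1          # time step (assumed value)
sigma_a = 0.5     # std. dev. of the acceleration noise (assumed value)

F = np.array([[1.0, dt],
              [0.0, 1.0]])          # constant-velocity transition matrix
G = np.array([[0.5 * dt**2],
              [dt]])                # maps acceleration noise into the state
Q = G @ G.T * sigma_a**2            # process noise covariance

def predict(x, P):
    """A priori estimate: x_k = F x_{k-1}, P_k = F P F^T + Q."""
    return F @ x, F @ P @ F.T + Q

x0 = np.array([[0.0], [1.0]])       # position 0, velocity 1
P0 = np.eye(2)
x1, P1 = predict(x0, P0)            # predicted position is 0 + dt * 1 = 0.1
```

Note that the covariance P1 grows relative to P0: predicting forward without a measurement always makes you less certain.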

Given a new measurement for the object being tracked the predicted state is updated as follows:

(1) Compute the residual measurement: y = z - H * x_k, where z is your new measurement, H = [1 0], and x_k is the predicted value from above.

(2) Compute the residual covariance: S = H * p_k * HT + R, where R is measurement noise.

(3) Compute the Kalman gain: K = p_k * HT * inv( S ), where inv() computes the inverse of S.

(4) Compute the new estimate of the object:

x = x_k + K * y
p = ( I - K * H ) * p_k, where I is the identity matrix.
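The four update steps above can be sketched in Python/NumPy like this (the measurement variance R and the starting state are made-up values):

```python
import numpy as np

H = np.array([[1.0, 0.0]])   # we only measure position, not velocity
R = np.array([[0.25]])       # measurement noise variance (assumed value)

def update(x, P, z):
    """Fold one position measurement z into the predicted state (x, P)."""
    y = z - H @ x                        # (1) measurement residual
    S = H @ P @ H.T + R                  # (2) residual covariance
    K = P @ H.T @ np.linalg.inv(S)       # (3) Kalman gain
    x_new = x + K @ y                    # (4) updated state estimate
    P_new = (np.eye(2) - K @ H) @ P      #     updated covariance
    return x_new, P_new

x, P = np.array([[0.1], [1.0]]), np.eye(2)   # predicted state (assumed)
x, P = update(x, P, np.array([[0.2]]))       # measurement arrives at 0.2
```

The updated position lands between the prediction (0.1) and the measurement (0.2), weighted by how much you trust each, and the position variance shrinks.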

Hope this helps a little. The Wikipedia page for the Kalman filter also gives a small example for a truck, which is basically the same example I have just given. If you have more specific questions about which part you don't understand, or any implementation questions, post them here.

- me

Emergent    982
1. Is your state space 1-dimensional, or is it that you have a particle moving in one dimension, governed by Newton's Laws of motion (as in inferno82's example)?

2. Do you want a continuous-time filter, or a discrete-time one?

3. In any case, I think the Kalman filter is best understood as a Bayesian estimator. You start with a Gaussian prior over the state space (specified by a mean -- the "estimate" -- and a covariance matrix). Then at each step, conceptually, you multiply your prior by the likelihood function -- also a Gaussian -- corresponding to your observation. Since the product of Gaussians is a Gaussian, you just get a new Gaussian distribution (encoded by a different mean and covariance) which becomes the prior for your next observation. That's really all that's going on.
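In the scalar case this "product of Gaussians" view can be written out in closed form. A small sketch (the prior and measurement numbers are made up):

```python
def fuse(mu_prior, var_prior, z, var_meas):
    """Posterior of a Gaussian prior times a Gaussian likelihood (1D).

    The product of two Gaussians is (up to normalization) another
    Gaussian; these are its mean and variance, which is exactly the
    scalar Kalman update.
    """
    K = var_prior / (var_prior + var_meas)   # scalar Kalman gain
    mu_post = mu_prior + K * (z - mu_prior)  # mean shifts toward z
    var_post = (1.0 - K) * var_prior         # variance always shrinks
    return mu_post, var_post

# Equally confident prior and measurement: the posterior mean is the
# average, and the variance is halved.
mu, var = fuse(0.0, 1.0, 1.0, 1.0)   # mu == 0.5, var == 0.5
```

The posterior then serves as the prior for the next observation, which is the recursion Emergent describes.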