The Kalman Filter: an introduction
© Dave Campbell 2009
Friday, June 12, 2009
Kalman Filter
an efficient recursive filter that estimates the state of a dynamic system from noisy data.
also called linear quadratic estimation
Cosmoworld roller coaster in Yokohama Japan
http://www.rcdb.com/ig1290.htm?picture=2
Discrete Kalman Filter
We wish to estimate the state x ∈ ℜn.
xt is the actual value taken by the system, which is governed by a linear stochastic difference equation with control inputs u ∈ ℜl:
xt = αxt−1 + βut−1 + wt−1
The stochastic term (process noise) is wt.
We don’t actually observe xt; instead, for some matrix H and measurement noise vt, we observe zt = Hxt + vt.
I observe (with a lot of noise) Z = D2X + err.
I need the H matrix in zt = Hxt + vt.
Then I will need to describe the update, i.e. α in
xt = αxt−1 + βut−1 + wt−1
Model:
xt = αxt−1 + βut−1 + wt−1
zt = Hxt + vt
where measurement noise vt ~ N(0,Σ) and process noise wt ~ N(0,Ω).
Parameters (all assumed constant for now):
α is an n x n matrix and β is n x l
H is n x n and may be sparse and/or rank deficient
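To make the model concrete, here is a minimal simulation sketch with n = 2 (say, position and velocity), where H is n x n but rank deficient: only the position row is nonzero. All concrete numbers here are illustrative assumptions, not from the slides, and Python/NumPy stands in for the Matlab used later in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.1
alpha = np.array([[1.0, dt],
                  [0.0, 1.0]])     # n x n transition matrix
beta = np.array([[0.0], [dt]])     # n x l control matrix (l = 1)
H = np.array([[1.0, 0.0],
              [0.0, 0.0]])         # sparse and rank deficient: only position observed
Omega = 0.01 * np.eye(2)           # process noise covariance, wt ~ N(0, Omega)
Sigma = 0.5 * np.eye(2)            # measurement noise covariance, vt ~ N(0, Sigma)

x = np.zeros(2)
u = np.array([1.0])                # constant control input
for t in range(50):
    w = rng.multivariate_normal(np.zeros(2), Omega)
    x = alpha @ x + beta @ u + w   # xt = alpha xt-1 + beta ut-1 + wt-1
    v = rng.multivariate_normal(np.zeros(2), Sigma)
    z = H @ x + v                  # zt = H xt + vt

print(x.shape, z.shape)  # both (2,): only z's first coordinate carries signal
```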
Wolfram Alpha: purveyor of answers to new questions. Like Google, but for questions that haven’t yet been asked.
The filtering idea:
1. Take a guess about the next state xt+1, one step ahead (Prediction)
2. Make an observation zt+1
3. Update the estimate for xt+1 (Correction)
4. Return to step 1
This makes it well suited to online state estimation.
Model: xt = αxt−1 + βut−1 + wt−1
zt = Hxt + vt
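The predict/correct loop above can be sketched numerically. The deck turns to Matlab later; here is a hypothetical scalar example in Python/NumPy instead, with all model values (a, h, Ω, Σ) chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar model: xt = a*xt-1 + wt,  zt = h*xt + vt  (no control input)
a, h = 1.0, 1.0
omega, sigma = 0.01, 0.25   # process and measurement noise variances

# Simulate a noisy random walk and its observations
T = 200
x = np.zeros(T)
z = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(omega))
    z[t] = h * x[t] + rng.normal(0.0, np.sqrt(sigma))

# The filter: steps 1-4 of the slide
xhat, delta = 0.0, 1.0      # initial state estimate and its variance
estimates = np.zeros(T)
for t in range(1, T):
    # 1. Prediction (a priori estimate and variance)
    xhat_m = a * xhat
    delta_m = a * delta * a + omega
    # 2.-3. Correction using observation zt
    K = delta_m * h / (h * delta_m * h + sigma)   # Kalman gain
    xhat = xhat_m + K * (z[t] - h * xhat_m)
    delta = (1 - K * h) * delta_m
    estimates[t] = xhat

# The filtered estimate tracks x more closely than the raw observations
err_filter = np.mean((estimates - x) ** 2)
err_raw = np.mean((z / h - x) ** 2)
print(err_filter < err_raw)
```

Because the loop only needs the current estimate and variance, each new zt is absorbed as it arrives, which is the online property claimed above.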
Model: xt = αxt−1 + βut−1 + wt−1
zt = Hxt + vt
We have estimates of xt before (x̂t−) and after (x̂t) observing zt, accounting for the past.
We estimate x̂t using x̂t− and zt:
best new = best old + weight × (new obs − old guess at obs)
residual or innovation: (new obs − guess at new obs before it is observed)
x̂t = x̂t− + K(zt − Hx̂t−)
Model: xt = αxt−1 + βut−1 + wt−1
zt = Hxt + vt
The a priori state estimate x̂t− is a prediction based on our most recent observation and state estimate x̂t−1.
We estimate x̂t− using x̂t−1 and ut−1, just using our best current information and updating by assuming no noise:
x̂t− = αx̂t−1 + βut−1
K is the gain or blending factor that minimizes the a posteriori covariance.
a posteriori estimate error: et = xt − x̂t
a posteriori estimate error covariance: Δt = E(etetT)
Find K by substituting
x̂t = x̂t− + K(zt − Hx̂t−)
into Δt = E(etetT).
Differentiate, set = 0, then solve for K.
Model: xt = αxt−1 + βut−1 + wt−1, zt = Hxt + vt
x̂t− = αx̂t−1 + βut−1, x̂t = x̂t− + K(zt − Hx̂t−)
a priori estimate error: et− = xt − x̂t−
a priori estimate error covariance: Δt− = E(et−[et−]T)
Then
Kt = Δt−HT(HΔt−HT + Σ)−1
Model: xt = αxt−1 + βut−1 + wt−1, zt = Hxt + vt
x̂t− = αx̂t−1 + βut−1, x̂t = x̂t− + K(zt − Hx̂t−)
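A quick numerical check of that minimization, sketched in Python/NumPy with arbitrary positive definite matrices (all values here are assumptions for illustration): for the a posteriori covariance written as Δt = (I − KH)Δt−(I − KH)T + KΣKT, no small random perturbation of the gain formula should reduce trace(Δt).

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2
A = rng.normal(size=(n, n))
delta_m = A @ A.T + np.eye(n)   # an arbitrary positive definite a priori covariance
H = rng.normal(size=(n, n))
Sigma = 0.3 * np.eye(n)         # measurement noise covariance

# The gain from the slide: Kt = Δt− HT (H Δt− HT + Σ)−1
K_opt = delta_m @ H.T @ np.linalg.inv(H @ delta_m @ H.T + Sigma)

def trace_post(K):
    """trace of the a posteriori covariance for a candidate gain K"""
    I = np.eye(n)
    return np.trace((I - K @ H) @ delta_m @ (I - K @ H).T + K @ Sigma @ K.T)

base = trace_post(K_opt)
worse = min(trace_post(K_opt + 1e-3 * rng.normal(size=(n, n)))
            for _ in range(20))
print(base <= worse)  # True: perturbing the optimal gain never helps
```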
a priori estimate error: et− = xt − x̂t−
a priori estimate error covariance: Δt− = E(et−[et−]T)
Then Kt = Δt−HT(HΔt−HT + Σ)−1 in the update x̂t = x̂t− + K(zt − Hx̂t−):
as Σ → 0, Kt → H−1, which weighs the residuals more heavily (we trust the observation more)
as Δt− → 0, Kt → 0 and the prediction is considered very good (ignore zt)
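A quick scalar illustration of those two limits (the numbers are arbitrary assumptions):

```python
# Scalar gain: K = delta_m * h / (h * delta_m * h + sigma)
delta_m = 1.0   # a priori error variance
h = 2.0

# Sigma -> 0: the gain approaches 1/h, i.e. H^-1, trusting the observation
for sigma in [1.0, 1e-3, 1e-9]:
    K = delta_m * h / (h * delta_m * h + sigma)
    print(sigma, K)   # K approaches 1/h = 0.5

# delta_m -> 0: the gain approaches 0, trusting the prediction and ignoring zt
sigma = 1.0
for delta_m in [1.0, 1e-3, 1e-9]:
    K = delta_m * h / (h * delta_m * h + sigma)
    print(delta_m, K)  # K approaches 0
```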
Assume that K and Δ are time varying and unknown.
We need to estimate K and project Δ forward (and update):
x̂t− = E(αxt−1 + βut−1 + wt−1)
Δt− = E([xt − x̂t−][xt − x̂t−]T)
Δt− = αΔt−1αT + Ω
This just comes from var(xt−).
After projecting forward, we update Δt when more recent information becomes available:
Δt = (I − KtH)Δt−
This is just the variance of the updated estimate
x̂t = x̂t− + K(zt − Hx̂t−)
(note that zt is an observed value, not a rv).
The Kalman Filter maintains the 1st and 2nd moments of the state distribution:
E(xt) = x̂t
E[(xt − x̂t)(xt − x̂t)T] = Δt
If the process noise and observation noise are normal, then the filter is actually Bayesian:
P(xt | zt) ~ N(x̂t, Δt)
The estimate is recursive and only uses the most recent information and the future prediction.
It doesn’t deal with all of the data directly.
But in this basic Kalman filter:
we need estimates of Σ and Ω in advance, and they should be constant.
Kalman Filter
This is the ‘linear regression’ of state space models: linear dynamics with process noise and linear observations with measurement error.
Back to Matlab to see how it turns out
Extended Kalman Filter: for nonlinear systems
The basic idea:
linearize the system using a Taylor expansion around the current estimate
basics: http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html
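A minimal scalar sketch of that idea (all functions and numbers are illustrative assumptions): propagate the estimate through the nonlinear maps, but use their derivatives (the 1-d Jacobians) at the current estimate wherever the linear filter used α and H.

```python
import numpy as np

f = lambda x: np.sin(x)        # nonlinear state transition
fprime = lambda x: np.cos(x)   # its derivative (Jacobian in 1-d)
g = lambda x: x ** 2           # nonlinear observation map
gprime = lambda x: 2 * x

omega, sigma = 0.01, 0.1       # process / measurement noise variances
xhat, delta = 0.8, 0.5         # current estimate and its variance
z = 0.6                        # a new observation

# Predict: the estimate goes through f, the variance through its linearization
xhat_m = f(xhat)
F = fprime(xhat)
delta_m = F * delta * F + omega

# Correct: the usual Kalman update with H replaced by the Jacobian at xhat_m
Hj = gprime(xhat_m)
K = delta_m * Hj / (Hj * delta_m * Hj + sigma)
xhat = xhat_m + K * (z - g(xhat_m))
delta = (1 - K * Hj) * delta_m

print(xhat, delta)  # the corrected variance is smaller than the predicted one
```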