EE363
Winter 2005-06
Lecture 8: The Extended Kalman filter

- Extended Kalman filter
- Nonlinear filtering
- Linearization and random variables
Nonlinear filtering
nonlinear Markov model:

    x(t+1) = f(x(t), w(t)),    y(t) = g(x(t), v(t))

- f is (possibly nonlinear) dynamics function
- g is (possibly nonlinear) measurement or output function
- w(0), w(1), ..., v(0), v(1), ... are independent
- even if w, v are Gaussian, x and y need not be

nonlinear filtering problem: find, e.g.,

    x̂(t|t−1) = E(x(t) | y(0), ..., y(t−1)),    x̂(t|t) = E(x(t) | y(0), ..., y(t))

general nonlinear filtering solution involves a PDE, and is not practical
Extended Kalman filter
extended Kalman filter (EKF) is a heuristic for the nonlinear filtering problem

- often works well (when tuned properly), but sometimes not
- widely used in practice
- based on:
  - linearizing dynamics and output functions at the current estimate
  - propagating an approximation of the conditional expectation and covariance
Linearization and random variables
consider φ : R^n → R^m

suppose E x = x̄, E(x − x̄)(x − x̄)ᵀ = Σ_x, and y = φ(x)

if Σ_x is small and φ is not too nonlinear, then

    y ≈ ỹ = φ(x̄) + Dφ(x̄)(x − x̄),    ỹ ∼ N(φ(x̄), Dφ(x̄) Σ_x Dφ(x̄)ᵀ)

this gives an approximation for the mean and covariance of a nonlinear function of a random variable:

    ȳ ≈ φ(x̄),    Σ_y ≈ Dφ(x̄) Σ_x Dφ(x̄)ᵀ

if Σ_x is not small compared to the curvature of φ, these estimates are poor
a good estimate can be found by Monte Carlo simulation:

    ȳ ≈ ȳ_mc = (1/N) Σ_{i=1}^{N} φ(x^(i))

    Σ_y ≈ (1/N) Σ_{i=1}^{N} (φ(x^(i)) − ȳ_mc)(φ(x^(i)) − ȳ_mc)ᵀ

where x^(1), ..., x^(N) are samples from the distribution of x, and N is large

another method: use the Monte Carlo formulas with a small number of nonrandom samples chosen as "typical", e.g., the 90% confidence ellipsoid semi-axis endpoints

    x^(i) = x̄ ± v_i,    Σ_x = V Vᵀ
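The Monte Carlo formulas above can be sketched as follows; the helper name and the demo choice of φ, x̄, Σ_x are illustrative, not part of the slides.

```python
import numpy as np

def mc_moments(phi, xbar, Sigma_x, N=10000, seed=None):
    """Monte Carlo estimate of the mean and covariance of y = phi(x),
    where x ~ N(xbar, Sigma_x), using the 1/N formulas from the slides."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(xbar, Sigma_x, size=N)  # samples x^(1), ..., x^(N)
    Y = np.array([phi(x) for x in X])                   # phi(x^(i))
    ybar_mc = Y.mean(axis=0)                            # (1/N) sum_i phi(x^(i))
    D = Y - ybar_mc
    Sigma_y_mc = D.T @ D / N                            # (1/N) sum_i (...)(...)^T
    return ybar_mc, Sigma_y_mc

# illustrative use: phi(x) = exp(x) with x ~ N(0, 1)
ybar, Sy = mc_moments(np.exp, np.zeros(1), np.eye(1), N=100000, seed=0)
```

For large N these converge to the exact moments; with a fixed seed the estimates are reproducible.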
Example
x ∼ N(0, 1), y = exp(x) (for this case we can compute the mean and variance of y exactly)
    method                            ȳ                       σ_y
    -------------------------------------------------------------------------
    exact values                      e^(1/2) = 1.649         (e² − e)^(1/2) = 2.161
    linearization                     1.000                   1.000
    Monte Carlo (N = 10)              1.385                   1.068
    Monte Carlo (N = 100)             1.430                   1.776
    sigma points (x = x̄, x̄ ± 1.5σ_x)  1.902                   2.268
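The linearization, sigma-point, and exact rows of this table can be reproduced directly; one assumption here is the 1/(N−1) normalization in the sigma-point spread, which matches the table's 2.268.

```python
import math

xbar, sigma_x = 0.0, 1.0
phi, Dphi = math.exp, math.exp                 # phi(x) = e^x, so Dphi(x) = e^x too

# linearization: ybar ~ phi(xbar), sigma_y ~ |Dphi(xbar)| sigma_x
ybar_lin = phi(xbar)
sigma_lin = abs(Dphi(xbar)) * sigma_x

# sigma points x in {xbar, xbar + 1.5 sigma_x, xbar - 1.5 sigma_x}
pts = [xbar, xbar + 1.5 * sigma_x, xbar - 1.5 * sigma_x]
vals = [phi(p) for p in pts]
ybar_sp = sum(vals) / len(vals)
# the 1/(N-1) spread reproduces the table's 2.268 (an assumption about the
# normalization used on the slide)
sigma_sp = math.sqrt(sum((v - ybar_sp) ** 2 for v in vals) / (len(vals) - 1))

# exact values for the lognormal y = exp(x)
ybar_ex = math.exp(0.5)
sigma_ex = math.sqrt(math.exp(2) - math.exp(1))
```

The two Monte Carlo rows depend on the particular random samples drawn, so they are not reproduced here.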
Extended Kalman filter
initialization: x̂(0|−1) = x̄_0, Σ(0|−1) = Σ_0

measurement update: linearize output function at x = x̂(t|t−1):

    C = (∂g/∂x)(x̂(t|t−1), 0)

    V = (∂g/∂v)(x̂(t|t−1), 0) Σ_v (∂g/∂v)(x̂(t|t−1), 0)ᵀ

measurement update based on linearization:

    x̂(t|t) = x̂(t|t−1) + Σ(t|t−1)Cᵀ (CΣ(t|t−1)Cᵀ + V)⁻¹ (y(t) − g(x̂(t|t−1), 0))

    Σ(t|t) = Σ(t|t−1) − Σ(t|t−1)Cᵀ (CΣ(t|t−1)Cᵀ + V)⁻¹ CΣ(t|t−1)
time update: linearize dynamics function at x = x̂(t|t):

    A = (∂f/∂x)(x̂(t|t), 0)

    W = (∂f/∂w)(x̂(t|t), 0) Σ_w (∂f/∂w)(x̂(t|t), 0)ᵀ

time update based on linearization:

    x̂(t+1|t) = f(x̂(t|t), 0),    Σ(t+1|t) = AΣ(t|t)Aᵀ + W
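The measurement and time updates above can be sketched as one EKF cycle. The numerical-Jacobian helper and the function signatures are illustrative assumptions; a real implementation might use analytic Jacobians or autodiff.

```python
import numpy as np

def num_jac(h, z0, eps=1e-6):
    """Forward-difference Jacobian of h at z0 (illustrative helper)."""
    z0 = np.asarray(z0, dtype=float)
    h0 = np.asarray(h(z0), dtype=float)
    J = np.zeros((h0.size, z0.size))
    for j in range(z0.size):
        dz = np.zeros_like(z0)
        dz[j] = eps
        J[:, j] = (np.asarray(h(z0 + dz), dtype=float) - h0) / eps
    return J

def ekf_step(f, g, xhat, Sigma, y, Sigma_w, Sigma_v, n_w, n_v):
    """One EKF cycle for x(t+1) = f(x, w), y = g(x, v).
    xhat, Sigma are xhat(t|t-1), Sigma(t|t-1)."""
    # measurement update: linearize g at x = xhat(t|t-1), w = 0, v = 0
    C = num_jac(lambda x: g(x, np.zeros(n_v)), xhat)
    Gv = num_jac(lambda v: g(xhat, v), np.zeros(n_v))
    V = Gv @ Sigma_v @ Gv.T
    K = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + V)
    xhat_m = xhat + K @ (y - g(xhat, np.zeros(n_v)))   # xhat(t|t)
    Sigma_m = Sigma - K @ C @ Sigma                    # Sigma(t|t)
    # time update: linearize f at x = xhat(t|t), w = 0
    A = num_jac(lambda x: f(x, np.zeros(n_w)), xhat_m)
    Fw = num_jac(lambda w: f(xhat_m, w), np.zeros(n_w))
    W = Fw @ Sigma_w @ Fw.T
    xhat_p = f(xhat_m, np.zeros(n_w))                  # xhat(t+1|t)
    Sigma_p = A @ Sigma_m @ A.T + W                    # Sigma(t+1|t)
    return xhat_m, Sigma_m, xhat_p, Sigma_p
```

With linear f and g the step reduces to the ordinary Kalman filter, which gives an easy sanity check.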
replacing the linearization with Monte Carlo sampling yields the particle filter

replacing the linearization with sigma-point estimates yields the unscented Kalman filter (UKF)
Example
p(t), u(t) ∈ R² are position and velocity of vehicle, with (p(0), u(0)) ∼ N(0, I)

vehicle dynamics:

    p(t+1) = p(t) + 0.1 u(t)

    u(t+1) = [  0.85  0.15 ] u(t) + w(t),    w(t) IID N(0, I)
             [ −0.1   0.85 ]

measurements: noisy measurements of distance to 9 points p_i ∈ R²:

    y_i(t) = ‖p(t) − p_i‖ + v_i(t),    i = 1, ..., 9,    v_i(t) IID N(0, 0.3²)
EKF results
EKF initialized with x̂(0|−1) = 0, Σ(0|−1) = I, where x = (p, u)

p_i shown as stars; p(t) as dotted curve; p̂(t|t) as solid curve
[Figure: vehicle trajectory p(t) and EKF estimate p̂(t|t) in the (p1, p2) plane]
Current position estimation error

‖p̂(t|t) − p(t)‖ versus t
[Figure: ‖p̂(t|t) − p(t)‖ versus t, for t = 0 to 200]
Current position estimation predicted error
(Σ(t|t)₁₁ + Σ(t|t)₂₂)^(1/2) versus t

[Figure: predicted position error (Σ(t|t)₁₁ + Σ(t|t)₂₂)^(1/2) versus t, for t = 0 to 1000]