Session 13d - Kalman Filter

The Kalman Filter is a recursive algorithm that provides optimal estimates of a signal in the presence of Gaussian noise, minimizing the Mean Square Error using a state space model. It operates in two main stages: prediction of the next state and correction based on measurements. The filter assumes that the internal state of a linear system can be estimated and is particularly effective in systems where noise is present in both the process and measurements.

Kalman Filter

KALMAN FILTER

Recursive optimal estimate of a signal in the presence of noise
▪ Minimizes the Mean Square Error
▪ Uses a state-space model
▪ Optimal when the system is contaminated only by Gaussian noise

Assumes that the internal state x of a linear system A can be estimated as x̂

Two main stages
▪ Prediction: the next dynamic state
▪ Correction: according to measurements

So if we have a dynamics equation, e.g.,

v(n + 1) = v(n) + Δt · a(n)

we can estimate the next state given the current state, right?

[Figure: true trajectory of the system]
KALMAN FILTER

We can estimate the next state given the current state, right? Hmmm, not really!

Sensors are not perfect; they are prone to errors and noise.

[Figure: true trajectory vs. measured trajectory]
KALMAN FILTER

The Kalman Filter infers parameters of interest from indirect, inaccurate, and uncertain (noisy) observations.

[Figure: measured x-axis acceleration over time (s)]
KALMAN FILTER

[Figure-only slide: tracking example]

KALMAN FILTER

[Figure-only slide: centroid estimation]
KALMAN FILTER

The statistical view of a measurement z can be represented as in the figure: the KF assumes that z is drawn from a Gaussian probability density function

f(z) = (1 / √((2π)^m |Σ_z|)) · exp(−½ (z − μ_z)^T Σ_z^{-1} (z − μ_z))

We need only two parameters to describe the measurement:
▪ μ_z = E[z]: the expected value of z (the most probable value to see)
▪ Σ_z = E[(z − μ_z)(z − μ_z)^T]: the covariance of z (how scattered the measurements are)
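As a quick check of this density in code, here is a minimal NumPy sketch; the values of μ_z and Σ_z below are illustrative placeholders, not taken from the slides.

```python
import numpy as np

def gaussian_pdf(z, mu, Sigma):
    """Evaluate the multivariate Gaussian density f(z) of a measurement z."""
    m = len(z)
    diff = z - mu
    norm = np.sqrt((2 * np.pi) ** m * np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm

# Illustrative 2D measurement with mean mu_z and covariance Sigma_z
mu_z = np.array([3.0, 2.0])
Sigma_z = np.array([[0.1, 0.0],
                    [0.0, 0.1]])
print(gaussian_pdf(np.array([3.1, 2.0]), mu_z, Sigma_z))
```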
KALMAN FILTER

The key is to minimize the covariance of the error, i.e., the variance across two random signals:

E[(x(n) − μ_x)(y(n) − μ_y)]

[Figure: the filter balances the reliability of the model against the correction from measurements]
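A short NumPy sketch of this sample covariance; the two signals below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)             # one random signal
y = 0.5 * x + rng.normal(0.0, 0.2, 1000)   # a second, correlated signal

# Sample estimate of E[(x(n) - mu_x)(y(n) - mu_y)]
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy)                 # close to 0.5, the true covariance in this setup
print(np.cov(x, y)[0, 1])     # NumPy's (unbiased) estimate of the same quantity
```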
KALMAN FILTER

Notation:
x ∈ ℝ^p: state vector (may not be directly observed)
z ∈ ℝ^m: measurement vector (what we see)
u ∈ ℝ^p: control vector (external signal)
B ∈ ℝ^{p×p}: control matrix (can be zero)
A ∈ ℝ^{p×p}: dynamics matrix, the state-transition model from time n to n + 1
H ∈ ℝ^{m×p}: measurement matrix, how measurements are acquired
w ∈ ℝ^p: process noise
Q ∈ ℝ^{p×p}: process covariance matrix
v ∈ ℝ^m: measurement noise
R ∈ ℝ^{m×m}: measurement covariance matrix
Σ ∈ ℝ^{p×p}: covariance of the model error
K ∈ ℝ^{p×m}: Kalman gain
KALMAN FILTER

The state model (dynamical system):

x(n + 1) = A x(n) + B u(n) + w(n)

▪ The noise w(n) has covariance Q(n) = E[w(n) w(n)^T]

The measurement model:

z(n) = H x(n) + v(n)

▪ The noise v(n) has covariance R(n) = E[v(n) v(n)^T]
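A minimal NumPy sketch of one step of these two models. The system below (a toy position/velocity model with the position measured) is illustrative and not the example used later in the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state system: position and velocity, with only the position measured
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])        # state-transition model
B = np.zeros((2, 2))              # no external control in this sketch
H = np.array([[1.0, 0.0]])        # measurement picks out the position
Q = 0.01 * np.eye(2)              # process noise covariance
R = np.array([[0.1]])             # measurement noise covariance

x = np.array([0.0, 1.0])
u = np.zeros(2)

# State model: x(n+1) = A x(n) + B u(n) + w(n)
w = rng.multivariate_normal(np.zeros(2), Q)
x_next = A @ x + B @ u + w

# Measurement model: z(n) = H x(n) + v(n)
v = rng.multivariate_normal(np.zeros(1), R)
z = H @ x_next + v
print(x_next, z)
```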
The Problem

The Kalman filter seeks to minimize the covariance of the error e(n) between the estimate x̂(n) and the true value x(n), even when x(n) is not known in advance:

min ∑_{n=0}^{N} E[e^2(n)]
s.t. x̂(n) = x̂⁻(n) + K (z(n) − H x̂⁻(n))

where K is the Kalman gain, which provides a way to weight how much impact the prediction and the measurement have on the estimate.
DERIVATION OF THE KALMAN FILTER EQNS

Given two random variables x ∈ ℝ^n and z ∈ ℝ^m for which we know (analytically or experimentally) the means μ_x = E[x], μ_z = E[z] and the covariance matrix

Σ = [ Σ_x     Σ_xz ]
    [ Σ_xz^T  Σ_z  ]  ∈ ℝ^{(n+m)×(n+m)}

Theorem. The optimal linear prediction of x given a measurement z₀ is

p(z₀) = Σ_xz Σ_z^{-1} (z₀ − μ_z) + μ_x

while the a posteriori error e = x − p(z₀) has covariance matrix

Σ_e = E[e e^T] = Σ_x − Σ_xz Σ_z^{-1} Σ_xz^T
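A small Monte Carlo sketch of this theorem; all numbers below are made up for illustration. Drawing (x, z) jointly Gaussian, the linear predictor above should leave a residual whose sample covariance matches Σ_x − Σ_xz Σ_z^{-1} Σ_xz^T.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative joint covariance of (x, z), with x in R^2 and z in R^1
Sxx = np.array([[1.0, 0.3],
                [0.3, 0.5]])
Sxz = np.array([[0.4],
                [0.2]])
Szz = np.array([[0.8]])
Sigma = np.block([[Sxx, Sxz],
                  [Sxz.T, Szz]])
mu = np.array([1.0, -1.0, 0.5])          # stacked [mu_x; mu_z]

samples = rng.multivariate_normal(mu, Sigma, size=200_000)
x, z = samples[:, :2], samples[:, 2:]

# Optimal linear prediction p(z) = mu_x + Sigma_xz Sigma_z^{-1} (z - mu_z)
pred = mu[:2] + (z - mu[2:]) @ np.linalg.solve(Szz, Sxz.T)
err = x - pred

print(np.cov(err.T))                              # sample covariance of the error
print(Sxx - Sxz @ np.linalg.solve(Szz, Sxz.T))    # Sigma_x - Sigma_xz Sigma_z^{-1} Sigma_xz^T
```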
DERIVATION OF THE KALMAN FILTER EQNS

Given a linear discrete-time system described by the matrices A, B, and H, the KF proposes two models:

Open loop: x(n + 1) = A x(n) + B u(n) + w(n)
Feedback loop: z(n) = H x(n) + v(n)

▪ w(n) ∈ ℝ^n: process noise vector with mean E[w] = 0 and covariance E[w(n) w^T(n)] = W(n)
▪ v(n) ∈ ℝ^p: measurement noise vector with mean E[v] = 0 and covariance E[v(n) v^T(n)] = V(n)

The Kalman filter assumes decorrelation between the noises and the measurement and state vectors, i.e.,

E[x(n) w^T(n)] = 0,   E[z(n) v^T(n)] = 0
DERIVATION OF THE KALMAN FILTER EQNS

Then, given a prior estimate with mean x̂(n − 1) and variance Σ̂(n − 1):

1. Prediction: the model obtained by using the previous best estimate x̂(n − 1) is

x̄(n) = A x̂(n − 1) + B u(n − 1)

whose error with respect to the true value x(n) is e(n) = x(n) − x̄(n). Since

x(n) = A x(n − 1) + B u(n − 1) + w(n − 1)

we have

e(n) = A x(n − 1) + B u(n − 1) + w(n − 1) − (A x̂(n − 1) + B u(n − 1))
     = A (x(n − 1) − x̂(n − 1)) + w(n − 1)

whose state variance is given by

Σ_x(n) = E[e(n) e^T(n)]
       = E[(A (x(n − 1) − x̂(n − 1)) + w(n − 1)) (A (x(n − 1) − x̂(n − 1)) + w(n − 1))^T]
DERIVATION OF THE KALMAN FILTER EQNS

Expanding,

Σ_x(n) = A E[e(n − 1) e^T(n − 1)] A^T + A E[e(n − 1) w^T(n − 1)]
       + E[w(n − 1) e^T(n − 1)] A^T + E[w(n − 1) w^T(n − 1)]

Recalling the assumptions

E[x(n) w^T(n)] = 0,   E[z(n) v^T(n)] = 0
E[w(n) w^T(n)] = W(n)
E[v(n) v^T(n)] = V(n)

then

A E[e(n − 1) e^T(n − 1)] A^T = A Σ_x(n − 1) A^T
E[e(n − 1) w^T(n − 1)] = 0
E[w(n − 1) e^T(n − 1)] = 0
E[w(n − 1) w^T(n − 1)] = W(n − 1)

Therefore

Σ_x(n) = A Σ_x(n − 1) A^T + W(n − 1)

This is called the a priori covariance matrix of the error in open loop (without feedback from measurements).
DERIVATION OF THE KALMAN FILTER EQNS

In a similar way, we can determine the covariance of the feedback loop.

The mean equation:

z̄(n) = H x̄(n)

whose error is

e_z(n) = z(n) − z̄(n) = H (x(n) − x̄(n)) + v(n) = H e_x(n) + v(n)

and whose variance is

Σ_z(n) = E[e_z(n) e_z^T(n)] = H Σ_x(n) H^T + V(n)

Finally, the covariance of the state and the measurement:

Σ_xz(n) = E[e_x(n) e_z^T(n)] = E[e_x(n) (H e_x(n) + v(n))^T]
        = E[e_x(n) e_x^T(n)] H^T + E[e_x(n) v^T(n)]

Since E[e_x(n) e_x^T(n)] = Σ_x(n) and E[e_x(n) v^T(n)] = 0,

Σ_xz(n) = Σ_x(n) H^T
DERIVATION OF THE KALMAN FILTER EQNS

In summary,

Σ_x(n) = A Σ_x(n − 1) A^T + W
Σ_z(n) = H Σ_x(n) H^T + V
Σ_xz(n) = Σ_x(n) H^T

From the optimal linear prediction theorem, the best estimate of x(n) given z(n) is

x̂(n) = Σ_xz(n) Σ_z(n)^{-1} (z(n) − z̄(n)) + x̄(n)

Substituting, we get

x̂(n) = Σ_x(n) H^T (H Σ_x(n) H^T + V)^{-1} (z(n) − H x̄(n)) + A x̂(n − 1) + B u(n − 1)
     = Σ_x(n) H^T (H Σ_x(n) H^T + V)^{-1} (z(n) − H (A x̂(n − 1) + B u(n − 1))) + A x̂(n − 1) + B u(n − 1)
DERIVATION OF THE KALMAN FILTER EQNS

From the above equation:
▪ Prediction
  A x̂(n − 1) + B u(n − 1): the predicted (mean) state
▪ Correction
  Σ_x(n) H^T (H Σ_x(n) H^T + V)^{-1}: the Kalman gain
  z(n) − H (A x̂(n − 1) + B u(n − 1)): the mean error (observed minus predicted measurement)

In general, the Kalman filter is a state observer with a time-variant gain.

Finally, the variance Σ_e of the propagation error x − x̂ is given by

Σ_e = Σ_x − Σ_xz Σ_z^{-1} Σ_xz^T
    = Σ_x(n) − Σ_x(n) H^T (H Σ_x(n) H^T + V)^{-1} H Σ_x^T(n)
Initial estimates: x̂(n − 1), Σ̂(n − 1)

Prediction
x̂⁻(n) = A x̂(n − 1) + B u(n − 1)
Σ̂⁻(n) = A Σ̂(n − 1) A^T + Q

Correction
K(n) = Σ̂⁻(n) H^T (H Σ̂⁻(n) H^T + R)^{-1}
x̂(n) = x̂⁻(n) + K(n) (z(n) − H x̂⁻(n))
Σ̂(n) = Σ̂⁻(n) − K(n) H Σ̂⁻(n)

Σ̂⁻: prediction, Σ̂: estimation
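These two stages translate almost line-for-line into code. Below is a minimal NumPy sketch; the function names kf_predict and kf_correct are ours, not from the slides.

```python
import numpy as np

def kf_predict(x_est, P_est, A, B, u, Q):
    """Prediction stage: propagate the previous estimate and its covariance."""
    x_pred = A @ x_est + B @ u            # x^-(n) = A x(n-1) + B u(n-1)
    P_pred = A @ P_est @ A.T + Q          # Sigma^-(n) = A Sigma(n-1) A^T + Q
    return x_pred, P_pred

def kf_correct(x_pred, P_pred, z, H, R):
    """Correction stage: weight the new measurement against the prediction."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_est = x_pred + K @ (z - H @ x_pred) # corrected state
    P_est = P_pred - K @ H @ P_pred       # corrected error covariance
    return x_est, P_est, K
```

At every sampling time n, kf_predict would be called with the previous estimate and kf_correct with the new measurement z(n).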
KALMAN FILTER

For instance, suppose that on a Cartesian 2D plane we are only capable of measuring the trajectory of a starship,

z(n) = [x, y]^T ∈ ℝ^2

However, sensors are not perfect, and they always contain errors.

[Figure: true trajectory vs. measured trajectory]

The state vector:

x(n) = [x, y, v_x, v_y, a_x, a_y]^T ∈ ℝ^6

Then p = 6 and m = 2, so
A ∈ ℝ^{6×6}
H ∈ ℝ^{2×6}

We also assume no external control.
KALMAN FILTER

System dynamics model, assuming constant acceleration:

▪ Horizontal dynamics
  a_x(n + 1) = a_x(n)
  v_x(n + 1) = v_x(n) + Δt a_x(n)
  x(n + 1) = x(n) + Δt v_x(n) + (Δt^2 / 2) a_x(n)

▪ Vertical dynamics
  a_y(n + 1) = a_y(n)
  v_y(n + 1) = v_y(n) + Δt a_y(n)
  y(n + 1) = y(n) + Δt v_y(n) + (Δt^2 / 2) a_y(n)
KALMAN FILTER

Dynamical system matrix A:
▪ Projects the state onto the next state
▪ Represents the system's equations as a matrix
▪ Each column corresponds to a variable of x(n)
▪ Each row corresponds to a system equation

(The explicit matrix A is given on the next slide.)
KALMAN FILTER

The first row is associated with x(n + 1):

A = [1  0  Δt  0   .5Δt^2  0     ]
    [0  1  0   Δt  0       .5Δt^2]
    [0  0  1   0   Δt      0     ]
    [0  0  0   1   0       Δt    ]
    [0  0  0   0   1       0     ]
    [0  0  0   0   0       1     ]

acting on x(n) = [x, y, v_x, v_y, a_x, a_y]^T, so that, e.g.,

x(n + 1) = 1 · x(n) + Δt v_x(n) + (Δt^2 / 2) a_x(n)
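A sketch of this matrix in NumPy; the Δt value and the state used for the check are placeholders.

```python
import numpy as np

dt = 0.1  # example sampling period
A = np.array([
    [1, 0, dt,  0, 0.5 * dt**2, 0],
    [0, 1,  0, dt, 0, 0.5 * dt**2],
    [0, 0,  1,  0, dt, 0],
    [0, 0,  0,  1, 0, dt],
    [0, 0,  0,  0, 1, 0],
    [0, 0,  0,  0, 0, 1],
])
# Only x and y are measured, so H selects the first two state variables
H = np.array([
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
])

state = np.array([2.0, 1.0, 0.5, -0.3, 0.1, 0.2])   # [x, y, vx, vy, ax, ay]
next_state = A @ state

# The first row reproduces x(n+1) = x(n) + dt*vx(n) + (dt^2/2)*ax(n)
assert np.isclose(next_state[0], state[0] + dt * state[2] + 0.5 * dt**2 * state[4])
print(next_state)
print(H @ next_state)   # the part of the state a perfect sensor would report
```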
KALMAN FILTER

Then we can set up the prior estimates of the covariance matrices for the measurement and process noise, as well as the first state and the covariance of the error:
▪ based on measurements
▪ setting small variances
KALMAN FILTER

For instance, suppose we take N measurements on the x-axis of an accelerometer without external perturbations.

[Figure: N accelerometer readings on the x-axis]

We compute the variance and see that σ^2 = 0.1; therefore, we can set up

R = [σ_x^2  0    ]
    [0      σ_y^2]  ∈ ℝ^{2×2}
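In code, this amounts to taking the sample variance of a batch of static sensor readings. A sketch follows; the simulated samples stand in for the real accelerometer data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for N static accelerometer readings on each axis
ax_samples = rng.normal(0.0, np.sqrt(0.1), 500)
ay_samples = rng.normal(0.0, np.sqrt(0.1), 500)

sigma_x2 = np.var(ax_samples)   # sample variance, ~0.1
sigma_y2 = np.var(ay_samples)

R = np.diag([sigma_x2, sigma_y2])   # measurement noise covariance
print(R)
```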
KALMAN FILTER

Suppose that the first measurement was

z(0) = [3, 2]^T

We also assume Δt = 0.001 s and g = 9.18 m/s^2, and set

R = diag(.01, .01)
Q = diag(.1, .1, .1, .1, .1, .1)
x̂(0) = [0, 0, 0, 0, 0, g]^T
Σ̂(0) = diag(.3, .3, .2, .2, .1, .1)

A = [1  0  1e-3  0     5e-7  0   ]
    [0  1  0     1e-3  0     5e-7]
    [0  0  1     0     1e-3  0   ]
    [0  0  0     1     0     1e-3]
    [0  0  0     0     1     0   ]
    [0  0  0     0     0     1   ]

Then, the prediction stage at n = 0:

x̂⁻(1) = A x̂(0) + B u(0) = [0, 4.59e-6, 0, 9.18e-3, 0, 9.18]^T

(only the y-components change, since the only nonzero state variable is a_y = g)
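The same prediction can be checked numerically. A sketch with the slide's values; B u is dropped since there is no external control.

```python
import numpy as np

dt, g = 0.001, 9.18
A = np.array([
    [1, 0, dt,  0, 0.5 * dt**2, 0],
    [0, 1,  0, dt, 0, 0.5 * dt**2],
    [0, 0,  1,  0, dt, 0],
    [0, 0,  0,  1, 0, dt],
    [0, 0,  0,  0, 1, 0],
    [0, 0,  0,  0, 0, 1],
])
x0 = np.array([0, 0, 0, 0, 0, g], dtype=float)   # x_hat(0), only a_y = g is nonzero

x_pred = A @ x0        # B u = 0: no external control
print(x_pred)          # [0, ~4.59e-6, 0, ~9.18e-3, 0, 9.18]
```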
KALMAN FILTER

With the same given matrices, the predicted (a priori) error covariance is

Σ̂⁻(1) = A Σ̂(0) A^T + Q

which evaluates to approximately

Σ̂⁻(1) ≈ [.4    0     2e-4  0     5e-8  0   ]
         [0     .4    0     2e-4  0     5e-8]
         [2e-4  0     .3    0     1e-4  0   ]
         [0     2e-4  0     .3    0     1e-4]
         [5e-8  0     1e-4  0     .2    0   ]
         [0     5e-8  0     1e-4  0     .2  ]
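Continuing the sketch above (A and x_pred already defined), the predicted covariance follows the same formula:

```python
Sigma0 = np.diag([0.3, 0.3, 0.2, 0.2, 0.1, 0.1])   # Sigma_hat(0)
Q = 0.1 * np.eye(6)

P_pred = A @ Sigma0 @ A.T + Q     # Sigma_hat^-(1)
print(np.round(P_pred, 8))
# diagonal ~ [.4, .4, .3, .3, .2, .2]; small couplings ~2e-4, 1e-4, 5e-8
```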
KALMAN FILTER

Then, the correction stage at n = 0.

Kalman gain:

K(0) = Σ̂⁻(1) H^T (H Σ̂⁻(1) H^T + R)^{-1}

(the numeric value of K(0) can be evaluated as in the sketch below)
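Continuing with the same variables, the gain can be evaluated as:

```python
H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0]], dtype=float)
R = np.diag([0.01, 0.01])

S = H @ P_pred @ H.T + R                 # innovation covariance (2x2)
K0 = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain, a 6x2 matrix
print(np.round(K0, 4))
```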
KALMAN FILTER

Corrected state:

x̂(1) = x̂⁻(1) + K(0) (z(0) − H x̂⁻(1))

(evaluated numerically in the sketch below)
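Continuing the same sketch, with the first measurement from the slide:

```python
z0 = np.array([3.0, 2.0])                  # the first measurement z(0)

x_corr = x_pred + K0 @ (z0 - H @ x_pred)   # corrected state estimate x_hat(1)
print(np.round(x_corr, 4))
```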
KALMAN FILTER

Corrected error covariance matrix:

Σ̂(1) = Σ̂⁻(1) − K(0) H Σ̂⁻(1)

(evaluated numerically in the sketch below)
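And the corrected covariance, continuing the same sketch:

```python
P_corr = P_pred - K0 @ H @ P_pred          # corrected error covariance Sigma_hat(1)
print(np.round(P_corr, 4))
```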
KALMAN FILTER

Therefore, the best estimate for time n = 1 is x̂(1).

This process is repeated at every sampling time n.
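Putting the whole cycle together, here is one possible complete filtering loop with the matrices of the worked example. The measurement stream is simulated for illustration, and B u is dropped since there is no external control.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, g = 0.001, 9.18
A = np.array([
    [1, 0, dt,  0, 0.5 * dt**2, 0],
    [0, 1,  0, dt, 0, 0.5 * dt**2],
    [0, 0,  1,  0, dt, 0],
    [0, 0,  0,  1, 0, dt],
    [0, 0,  0,  0, 1, 0],
    [0, 0,  0,  0, 0, 1],
])
H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0]], dtype=float)
Q = 0.1 * np.eye(6)
R = np.diag([0.01, 0.01])

x_est = np.array([0, 0, 0, 0, 0, g], dtype=float)
P_est = np.diag([0.3, 0.3, 0.2, 0.2, 0.1, 0.1])

# Simulated stream of noisy position measurements around (3, 2)
measurements = np.array([3.0, 2.0]) + rng.normal(0.0, 0.1, size=(50, 2))

for z in measurements:
    # Prediction
    x_pred = A @ x_est
    P_pred = A @ P_est @ A.T + Q
    # Correction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_est = x_pred + K @ (z - H @ x_pred)
    P_est = P_pred - K @ H @ P_pred

print(np.round(x_est[:2], 3))   # filtered position after 50 steps
```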
KALMAN FILTER

If the system is at rest, the Kalman filter will converge to the true value, i.e., it minimizes the uncertainty of the measurement.

The Kalman gain provides a balance (weight) between the measurement and the process model: if the previous estimate provides a more confident value, then the current estimate tends toward that value rather than toward the measured value.
KALMAN FILTER: APPLICATIONS

▪ Object tracking
▪ Trajectory estimation
▪ Trajectory planning on quadcopters, where
  B ≠ 0
  u: acceleration (control signal)
  z: 3D position
  x: position and elevation
FINAL THOUGHTS

▪ When the source of the unwanted signal is stationary (its mean/variance stay constant over time): Wiener filter or adaptive filter
▪ When the source of the unwanted signal is non-stationary (its mean/variance change over time) and a linear filter is required: Kalman filter
▪ When the source of the unwanted signal is either stationary or non-stationary and a non-linear filter is required: neural networks

All three of these filters are time-varying filters.
