
Prediction

Adaptive Filter Theory


Advanced Digital Signal Processing

September 30, 2020

General Introduction

• Prediction refers to estimating the future value of a stationary discrete-time stochastic process, given a set of past samples of the process.
• In linear prediction, the predicted value is represented as a linear combination of the past samples.
• Predictors: forward prediction and backward prediction.

Forward Prediction
Consider a forward predictor in the form of a finite-duration impulse response (FIR) filter with M tap weights wf,1, wf,2, · · · , wf,M and tap inputs u(n − 1), · · · , u(n − M).
The objective is to make an estimate of u(n) using these past sample values.
The tap weights are optimized in the mean-square-error sense according to Wiener filter theory. The predicted value is

û(n) = Σ_{k=1}^{M} wf,k u(n − k)    (1)

In the case of forward linear prediction, the desired response is d(n) = u(n).


The forward prediction error equals the difference between the input sample u(n) and its predicted value û(n).
Denoting the forward prediction error as fM(n),

fM (n) = u(n) − û(n) (2)
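As a small numerical sketch of Eqs. (1)-(2), the snippet below computes the one-step forward prediction and its error for a single time index. The weight values and samples are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical forward-predictor tap weights w_f,1 ... w_f,M (M = 3)
w_f = np.array([0.9, -0.4, 0.1])

# Hypothetical past samples u(n-1), u(n-2), u(n-3) and the current sample u(n)
u_past = np.array([1.2, 0.8, 0.5])
u_n = 1.0

# Eq. (1): u_hat(n) = sum_k w_f,k * u(n-k)
u_hat = np.dot(w_f, u_past)

# Eq. (2): forward prediction error f_M(n) = u(n) - u_hat(n)
f_M = u_n - u_hat
print(u_hat, f_M)
```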


One Step Predictor

M denotes the order of the predictor.
The minimum mean-square prediction error is denoted as

PM = E[|fM(n)|²]    (3)

Let wf denote the M × 1 optimum tap-weight vector of the forward predictor:

wf = [wf,1, wf,2, · · · , wf,M]ᵀ    (4)

To solve the Wiener-Hopf equations for the weight vector wf, the following two quantities are required:
1. The M × M correlation matrix of the tap inputs
2. The M × 1 cross-correlation vector between the tap inputs and the desired response

The tap-input vector is defined as

u(n − 1) = [u(n − 1), u(n − 2), · · · , u(n − M)]ᵀ    (5)

The correlation matrix R is defined as

R = E[u(n − 1)uᴴ(n − 1)] =
    [ r(0)         r(1)         · · ·   r(M − 1) ]
    [ r*(1)        r(0)         · · ·   r(M − 2) ]
    [   ⋮            ⋮            ⋱        ⋮     ]
    [ r*(M − 1)    r*(M − 2)    · · ·   r(0)     ]        (6)

The cross-correlation vector between the tap inputs and the desired response u(n) can be written as

r = E[u(n − 1)u*(n)] = [r(−1), r(−2), · · · , r(−M)]ᵀ = [r*(1), r*(2), · · · , r*(M)]ᵀ    (7)

The Wiener-Hopf equations to solve the forward linear prediction (FLP) problem for stationary inputs are

Rwf = r    (8)
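A minimal numerical sketch of Eq. (8), assuming a real-valued stationary process with a hypothetical autocorrelation sequence r(0), · · · , r(M): build the Toeplitz correlation matrix of Eq. (6) and the cross-correlation vector of Eq. (7), then solve for wf.

```python
import numpy as np

# Hypothetical autocorrelation values r(0), r(1), r(2), r(3) of a real-valued process
r_seq = np.array([1.0, 0.7, 0.4, 0.2])
M = 3

# Eq. (6): M x M Toeplitz correlation matrix; R[i, j] = r(|i - j|) for real data
R = np.array([[r_seq[abs(i - j)] for j in range(M)] for i in range(M)])

# Eq. (7): cross-correlation vector r = [r(1), ..., r(M)]^T (real case)
r_vec = r_seq[1:M + 1]

# Eq. (8): Wiener-Hopf equations R w_f = r
w_f = np.linalg.solve(R, r_vec)
print(w_f)
```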

Forward Prediction Error Filter
The forward predictor consists of
1. M unit-delay elements
2. M tap weights wf,1, wf,2, · · · , wf,M
3. an M-length input vector u(n − 1), u(n − 2), · · · , u(n − M)
The forward prediction error can be written as

fM(n) = u(n) − Σ_{k=1}^{M} wf,k u(n − k)    (9)

Let aM,k, k = 0, 1, · · · , M denote the tap weights of a new FIR filter, related to the previous filter as

aM,k = { 1,        k = 0
       { −wf,k,    k = 1, 2, · · · , M        (10)

The forward prediction error then becomes

fM(n) = Σ_{k=0}^{M} aM,k u(n − k)    (11)

A filter that operates on the set of samples u(n), u(n − 1), · · · , u(n − M) to produce the forward prediction error fM(n) at its output is called a forward prediction-error filter.
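As a sketch of Eqs. (10)-(11), the prediction-error filter can be applied to an entire signal as an ordinary FIR filter with coefficients aM = [1, −wf,1, · · · , −wf,M]. The weights and input signal below are hypothetical; scipy.signal.lfilter is used for the FIR filtering.

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical forward-predictor weights and input signal u(n)
w_f = np.array([0.9, -0.4, 0.1])
u = np.random.default_rng(0).standard_normal(100)

# Eq. (10): prediction-error filter coefficients a_M = [1, -w_f,1, ..., -w_f,M]
a_M = np.concatenate(([1.0], -w_f))

# Eq. (11): f_M(n) = sum_{k=0}^{M} a_M,k u(n-k), i.e. FIR filtering of u(n)
f_M = lfilter(a_M, [1.0], u)
print(f_M[:5])
```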

Prediction Error Filter

Relationship between the predictor and the prediction-error filter

Backward Linear Prediction

We may also operate on the same time series in the backward direction; that is, we may use the subset of M samples u(n), · · · , u(n − M + 1) to make a prediction of the sample u(n − M).
This second operation corresponds to backward linear prediction by one step, measured with respect to time n − M + 1.
We make a linear prediction of the sample u(n − M) as

û(n − M) = Σ_{k=1}^{M} wb,k u(n − k + 1)    (12)

where wb,1, wb,2, · · · , wb,M are the tap weights.
In the case of backward prediction, the desired signal is d(n) = u(n − M).

Backward Linear Prediction Contd...

The backward prediction error bM (n) can be calculated as

bM (n) = u(n − M ) − û(n − M ) (13)
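A small sketch of Eqs. (12)-(13) for a single time index, with hypothetical backward weights and samples. Note that the tap inputs here are u(n), u(n − 1), · · · , u(n − M + 1), while the target is u(n − M).

```python
import numpy as np

# Hypothetical backward-predictor tap weights w_b,1 ... w_b,M (M = 3)
w_b = np.array([0.1, -0.4, 0.9])

# Hypothetical tap inputs u(n), u(n-1), u(n-2) and the sample to be predicted, u(n-3)
u_taps = np.array([1.0, 1.2, 0.8])
u_n_minus_M = 0.5

# Eq. (12): u_hat(n-M) = sum_k w_b,k * u(n-k+1)
u_hat_back = np.dot(w_b, u_taps)

# Eq. (13): backward prediction error b_M(n)
b_M = u_n_minus_M - u_hat_back
print(u_hat_back, b_M)
```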

The minimum mean-square prediction error is obtained as

PM = E[|bM(n)|²]    (14)

Let wb denote the M × 1 optimum tap-weight vector of the backward predictor:

wb = [wb,1, wb,2, · · · , wb,M]ᵀ    (15)

To solve the Wiener-Hopf equations corresponding to the weight vector wb, the required correlation matrix is

R = E[u(n)uᴴ(n)]    (16)

where u(n) = [u(n), u(n − 1), · · · , u(n − M + 1)]ᵀ
The M × 1 cross-correlation vector between the tap inputs and the desired response is

rB = E[u(n)u*(n − M)] = [r(M), r(M − 1), · · · , r(1)]ᵀ    (17)

The variance of the desired response u(n − M) equals r(0).
The Wiener-Hopf equations used to solve the backward linear prediction (BLP) problem are

Rwb = rB    (18)
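Continuing the earlier numerical sketch with the same hypothetical autocorrelation sequence, Eq. (18) can be solved the same way. For real-valued data, rB is simply the FLP vector r written in reverse order.

```python
import numpy as np

# Same hypothetical autocorrelation values as in the FLP sketch
r_seq = np.array([1.0, 0.7, 0.4, 0.2])
M = 3

# Eq. (16): correlation matrix of u(n), ..., u(n-M+1); same Toeplitz matrix as Eq. (6) for stationary data
R = np.array([[r_seq[abs(i - j)] for j in range(M)] for i in range(M)])

# Eq. (17): r_B = [r(M), r(M-1), ..., r(1)]^T, i.e. the FLP vector r in reverse order (real case)
r_B = r_seq[1:M + 1][::-1]

# Eq. (18): Wiener-Hopf equations R w_b = r_B
w_b = np.linalg.solve(R, r_B)
print(w_b)
```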

Backward Prediction Error Filter
The given set of input samples is u(n) = [u(n), u(n − 1), · · · , u(n − M + 1)]ᵀ and the desired response is d(n) = u(n − M).
The prediction error is

bM(n) = u(n − M) − Σ_{k=1}^{M} wb,k u(n − k + 1)    (19)

Let us define another set of coefficients cM,k in terms of the corresponding backward-predictor weights as follows:

cM,k = { −wb,k+1,    k = 0, 1, · · · , M − 1
       { 1,           k = M                    (20)

The backward prediction error then becomes

bM(n) = Σ_{k=0}^{M} cM,k u(n − k)    (21)
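A sketch of Eqs. (20)-(21), mirroring the forward case: the backward prediction-error filter is an FIR filter with coefficients cM = [−wb,1, · · · , −wb,M, 1]. The weights and signal are again hypothetical.

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical backward-predictor weights and input signal u(n)
w_b = np.array([0.1, -0.4, 0.9])
u = np.random.default_rng(0).standard_normal(100)

# Eq. (20): c_M,k = -w_b,k+1 for k = 0..M-1 and c_M,M = 1
c_M = np.concatenate((-w_b, [1.0]))

# Eq. (21): b_M(n) = sum_{k=0}^{M} c_M,k u(n-k), i.e. FIR filtering of u(n)
b_M = lfilter(c_M, [1.0], u)
print(b_M[:5])
```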
Backward one-step predictor

Backward Prediction Error Filter

Relation between backward and forward predictors

Rwb = rB    (22)
Rᵀ wbᴮ = r    (23)

where wbᴮ is the weight vector wb with its elements arranged in reverse (backward) order.
For the forward predictor,

Rwf = r    (24)

so that

wbᴮ = wf    (25)

We may modify a backward predictor into a forward predictor by reversing the sequence in which its tap weights are positioned, and also taking their complex conjugates if the weights are complex.
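A quick numerical check of Eqs. (22)-(25) under the same hypothetical real-valued autocorrelation sequence: solving the forward and backward Wiener-Hopf equations and reversing wb reproduces wf (for complex data one would also conjugate, as noted above).

```python
import numpy as np

# Hypothetical autocorrelation values of a real-valued stationary process
r_seq = np.array([1.0, 0.7, 0.4, 0.2])
M = 3
R = np.array([[r_seq[abs(i - j)] for j in range(M)] for i in range(M)])
r_vec = r_seq[1:M + 1]

w_f = np.linalg.solve(R, r_vec)            # Eq. (24): R w_f = r
w_b = np.linalg.solve(R, r_vec[::-1])      # Eq. (22): R w_b = r_B, with r_B = r reversed
print(np.allclose(w_b[::-1], w_f))         # Eq. (25): reversed w_b equals w_f -> True
```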

Backward Prediction Error Filter in terms of forward coefficients

Backward Prediction Error Filter Contd...

wb,M−k+1 = wf,k,    k = 1, 2, · · · , M    (26)

or equivalently

wb,k = wf,M−k+1,    k = 1, 2, · · · , M    (27)

Substituting into the definition of cM,k in Eq. (20),

cM,k = { −wf,M−k,    k = 0, 1, · · · , M − 1
       { 1,           k = M                    (28)

cM,k = aM,M−k,    k = 0, 1, · · · , M    (29)
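Finally, a one-line check of Eq. (29) using the hypothetical weights from the earlier sketches: the backward error-filter coefficients cM are the forward error-filter coefficients aM in reverse order (conjugated in the complex case).

```python
import numpy as np

# Hypothetical forward weights; backward weights follow from Eq. (27): w_b,k = w_f,M-k+1
w_f = np.array([0.9, -0.4, 0.1])
w_b = w_f[::-1]

a_M = np.concatenate(([1.0], -w_f))   # Eq. (10)
c_M = np.concatenate((-w_b, [1.0]))   # Eq. (20)

print(np.allclose(c_M, a_M[::-1]))    # Eq. (29): c_M,k = a_M,M-k -> True
```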
