
Demodulation / Detection

7th Dec, 2020

CHAPTER 3
- Detection of Binary Signal in Gaussian Noise
- Matched Filters and Correlators
- Bayes' Decision Criterion
- Maximum Likelihood Detector
- Error Performance
Demodulation and Detection

[Figure 3.1 shows the receiver chain: the transmitted waveform passes through the channel (AWGN); the received waveform goes through a receiving filter (with frequency down-conversion for bandpass signals) and an optional equalizing filter that compensates for channel-induced ISI; the output is sampled at t = T, and a threshold comparison detects the message symbol. The demodulate-and-sample stage is essential; the equalizer is optional.]

Figure 3.1: Two basic steps in the demodulation/detection of digital signals

The digital receiver performs two basic functions:


- Demodulation: recovery of a waveform to be sampled at t = nT.
- Detection: the decision-making process of selecting the digital symbol that the sample most likely represents.
Detection of Binary Signal in Gaussian Noise

- For any binary channel, the transmitted signal over a symbol interval (0, T) is:

$$ s_i(t) = \begin{cases} s_0(t), & 0 \le t \le T, & \text{for a binary 0} \\ s_1(t), & 0 \le t \le T, & \text{for a binary 1} \end{cases} $$

- The received signal r(t), degraded by noise n(t) and possibly by the impulse response of the channel $h_c(t)$, is

$$ r(t) = s_i(t) * h_c(t) + n(t), \qquad i = 1, 2 \tag{3.1} $$

where n(t) is assumed to be a zero-mean AWGN process.
- For an ideal distortionless channel, where $h_c(t)$ is an impulse function and convolution with $h_c(t)$ produces no degradation, r(t) can be represented as:

$$ r(t) = s_i(t) + n(t), \qquad i = 1, 2, \quad 0 \le t \le T \tag{3.2} $$
Detection of Binary Signal in Gaussian Noise

- The recovery of the signal at the receiver consists of two parts:
- Filter
  - Reduces the received signal to a single variable z(T)
  - z(T) is called the test statistic
- Detector (or decision circuit)
  - Compares z(T) to some threshold level $\gamma_0$, i.e.,

$$ z(T) \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_0 $$

where $H_1$ and $H_0$ are the two possible binary hypotheses.
Receiver Functionality

The recovery of the signal at the receiver consists of two parts:
1. Waveform-to-sample transformation
   - Demodulator followed by a sampler
   - At the end of each symbol duration T, the predetection point yields a sample z(T), called the test statistic:

$$ z(T) = a_i(T) + n_0(T), \qquad i = 1, 2 \tag{3.3} $$

where $a_i(T)$ is the desired signal component and $n_0(T)$ is the noise component.
2. Detection of symbol
   - Assume that the input noise is a Gaussian random process and the receiving filter is linear:

$$ p(n_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{n_0}{\sigma_0} \right)^2 \right] \tag{3.4} $$

   - Then the output is another Gaussian random process:

$$ p(z \mid s_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_0}{\sigma_0} \right)^2 \right] $$

$$ p(z \mid s_1) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_1}{\sigma_0} \right)^2 \right] $$

where $\sigma_0^2$ is the noise variance.
- The ratio of instantaneous signal power to average noise power, $(S/N)_T$, at time t = T, out of the sampler is:

$$ \left( \frac{S}{N} \right)_T = \frac{a_i^2}{\sigma_0^2} \tag{3.45} $$

- We need to achieve the maximum $(S/N)_T$.


Find Filter Transfer Function H0(f)

- Objective: to maximize $(S/N)_T$.
- Expressing the signal $a_i(t)$ at the filter output in terms of the filter transfer function H(f):

$$ a_i(t) = \int_{-\infty}^{\infty} H(f) S(f) e^{j 2\pi f t} \, df \tag{3.46} $$

where S(f) is the Fourier transform of the input signal s(t).
- The output noise power can be expressed as:

$$ \sigma_0^2 = \frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2 \, df \tag{3.47} $$

- Expressing $(S/N)_T$ as:

$$ \left( \frac{S}{N} \right)_T = \frac{\left| \int_{-\infty}^{\infty} H(f) S(f) e^{j 2\pi f T} \, df \right|^2}{\frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2 \, df} \tag{3.48} $$
- To find the H(f) = H0(f) that maximizes $(S/N)_T$, use Schwarz's inequality:

$$ \left| \int_{-\infty}^{\infty} f_1(x) f_2(x) \, dx \right|^2 \le \int_{-\infty}^{\infty} |f_1(x)|^2 \, dx \int_{-\infty}^{\infty} |f_2(x)|^2 \, dx \tag{3.49} $$

- Equality holds if $f_1(x) = k f_2^*(x)$, where k is an arbitrary constant and * indicates complex conjugate.
- Associate H(f) with $f_1(x)$ and $S(f) e^{j 2\pi f T}$ with $f_2(x)$ to get:

$$ \left| \int_{-\infty}^{\infty} H(f) S(f) e^{j 2\pi f T} \, df \right|^2 \le \int_{-\infty}^{\infty} |H(f)|^2 \, df \int_{-\infty}^{\infty} |S(f)|^2 \, df \tag{3.50} $$

- Substitute into eq. 3.48 to yield:

$$ \left( \frac{S}{N} \right)_T \le \frac{2}{N_0} \int_{-\infty}^{\infty} |S(f)|^2 \, df \tag{3.51} $$

- Or $\max \left( \frac{S}{N} \right)_T = \frac{2E}{N_0}$, where the energy E of the input signal s(t) is:

$$ E = \int_{-\infty}^{\infty} |S(f)|^2 \, df $$

- Thus $(S/N)_T$ depends on the input signal energy E and the power spectral density of the noise, NOT on the particular shape of the waveform.

- Equality for $\max \left( \frac{S}{N} \right)_T = \frac{2E}{N_0}$ holds for the optimum filter transfer function H0(f) such that:

$$ H(f) = H_0(f) = k S^*(f) e^{-j 2\pi f T} \tag{3.54} $$

$$ h(t) = \mathcal{F}^{-1} \left\{ k S^*(f) e^{-j 2\pi f T} \right\} \tag{3.55} $$

- For real-valued s(t):

$$ h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases} \tag{3.56} $$

- The impulse response of a filter producing the maximum output signal-to-noise ratio is the mirror image of the message signal s(t), delayed by the symbol time duration T.
- The filter so designed is called a MATCHED FILTER:

$$ h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases} $$

- It is defined as: a linear filter designed to provide the maximum signal-to-noise power ratio at its output for a given transmitted symbol waveform.
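The matched-filter property can be checked numerically: with h(t) = s(T - t) (taking k = 1), the noise-free filter output peaks at the sampling instant t = T, where it equals the signal energy E. A minimal numpy sketch, with an illustrative rectangular-pulse waveform:

```python
import numpy as np

fs = 100                           # samples per second (illustrative)
T = 1.0
t = np.arange(0, T, 1 / fs)

s = np.where(t < 0.5, 1.0, -1.0)   # an example symbol waveform s(t)
h = s[::-1]                        # matched filter: h(t) = s(T - t), k = 1

# Noise-free output z(t) = s(t) * h(t); the discrete convolution is
# scaled by the sample period to approximate the integral.
z = np.convolve(s, h) / fs

# The output peaks at the sample instant t = T (index len(s) - 1),
# where it equals the signal energy E = integral of s^2(t) dt.
E = np.sum(s ** 2) / fs
print(z[len(s) - 1], E)            # both equal 1.0 here
```

By Schwarz's inequality (eq. 3.49), no other filter with the same noise gain produces a larger noise-free output at t = T.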
Correlation Realization of Matched Filter

- A filter that is matched to the waveform s(t) has an impulse response:

$$ h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases} $$

- h(t) is a delayed version of the mirror image (rotated about the t = 0 axis) of the original signal waveform.

Figure 3.7: Signal waveform, mirror image of the signal waveform, and impulse response of the matched filter

- This is a causal system.
- Recall that a system is causal if, before an excitation is applied at time t = T, the response is zero for $-\infty < t < T$.
- The signal waveform at the output of the matched filter is:

$$ z(t) = r(t) * h(t) = \int_0^t r(\tau)\, h(t - \tau) \, d\tau \tag{3.57} $$

- Substituting h(t) yields:

$$ z(t) = \int_0^t r(\tau)\, s[T - (t - \tau)] \, d\tau = \int_0^t r(\tau)\, s(T - t + \tau) \, d\tau \tag{3.58} $$

- When t = T:

$$ z(T) = \int_0^T r(\tau)\, s(\tau) \, d\tau \tag{3.59} $$
- The functions of the correlator and the matched filter are the same. Compare (a) the correlator and (b) the matched filter:
- From (a), the correlator output sampled at t = T is:

$$ z(t)\big|_{t=T} = z(T) = \int_0^T r(\tau)\, s(\tau) \, d\tau $$

- From (b):

$$ z'(t) = r(t) * h(t) = \int_{-\infty}^{\infty} r(\tau)\, h(t - \tau) \, d\tau = \int_0^t r(\tau)\, h(t - \tau) \, d\tau $$

But

$$ h(t) = s(T - t) \;\Rightarrow\; h(t - \tau) = s[T - (t - \tau)] = s(T - t + \tau) $$

$$ \Rightarrow\; z'(t) = \int_0^t r(\tau)\, s(\tau + T - t) \, d\tau $$

- At the sampling instant t = T, we have:

$$ z'(t)\big|_{t=T} = z'(T) = \int_0^T r(\tau)\, s(\tau + T - T) \, d\tau = \int_0^T r(\tau)\, s(\tau) \, d\tau $$

- This is the same result obtained in (a). Hence:

$$ z(T) = z'(T) $$
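The equivalence z(T) = z'(T) is easy to verify numerically: correlating r(t) with s(t) over (0, T) gives the same number as convolving r(t) with h(t) = s(T - t) and sampling at t = T. A minimal numpy sketch with an illustrative sinusoidal symbol waveform:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 100, 1.0
t = np.arange(0, T, 1 / fs)

s = np.sin(2 * np.pi * 2 * t)          # example symbol waveform
r = s + rng.normal(0, 0.3, t.shape)    # received waveform with AWGN

# (a) Correlator: z(T) = integral over (0, T) of r(tau) s(tau) d tau
z_corr = np.sum(r * s) / fs

# (b) Matched filter h(t) = s(T - t), output sampled at t = T
h = s[::-1]
z_mf = (np.convolve(r, h) / fs)[len(s) - 1]

print(z_corr, z_mf)                    # identical up to rounding
```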
Detection

- The matched filter reduces the received signal to a single variable z(T), after which the detection of the symbol is carried out.
- The concept of the maximum likelihood detector is based on statistical decision theory.
- It allows us to:
  - formulate the decision rule that operates on the data
  - optimize the detection criterion

$$ z(T) \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_0 $$
Probabilities Review

- P[s0], P[s1]: a priori probabilities — known before transmission
- p(z): pdf of the received sample z
- p(z|s0), p(z|s1): conditional pdfs of the received signal z, conditioned on the class $s_i$
- P[s0|z], P[s1|z]: a posteriori probabilities — after examining the sample, we refine our previous knowledge
- P[s1|s0], P[s0|s1]: wrong decisions (errors)
- P[s1|s1], P[s0|s0]: correct decisions
How to Choose the Threshold?

- Maximum likelihood ratio test and maximum a posteriori (MAP) criterion:

If $p(s_0 \mid z) > p(s_1 \mid z)$, decide $H_0$;
else, $p(s_1 \mid z) > p(s_0 \mid z)$, decide $H_1$.

- The problem is that the a posteriori probabilities are not known.
- Solution: use Bayes' theorem:

$$ p(s_i \mid z) = \frac{p(z \mid s_i) P(s_i)}{p(z)} $$

$$ \frac{p(z \mid s_1) P(s_1)}{p(z)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{p(z \mid s_0) P(s_0)}{p(z)} \;\Rightarrow\; p(z \mid s_1) P(s_1) \underset{H_0}{\overset{H_1}{\gtrless}} p(z \mid s_0) P(s_0) $$

- MAP criterion:

$$ L(z) = \frac{p(z \mid s_1)}{p(z \mid s_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P(s_0)}{P(s_1)} \qquad \text{likelihood ratio test (LRT)} $$
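The LRT above can be written directly in code. The sketch below is a minimal illustration assuming Gaussian likelihoods with means a0, a1 and common variance σ0²; the numeric values are hypothetical.

```python
import math

def lrt_decide(z, a0, a1, sigma, p0, p1):
    """Likelihood ratio test: decide H1 if L(z) > P(s0)/P(s1), else H0."""
    # Gaussian conditional pdfs p(z|s_i); the common factor
    # 1/(sigma sqrt(2 pi)) cancels in the ratio, so it is omitted.
    p_z_s0 = math.exp(-0.5 * ((z - a0) / sigma) ** 2)
    p_z_s1 = math.exp(-0.5 * ((z - a1) / sigma) ** 2)
    L = p_z_s1 / p_z_s0
    return 1 if L > p0 / p1 else 0

# Equal priors reduce to the ML rule (threshold (a0 + a1)/2 = 0 here)
print(lrt_decide(0.2, -1.0, 1.0, 0.5, 0.5, 0.5))   # -> 1
print(lrt_decide(-0.2, -1.0, 1.0, 0.5, 0.5, 0.5))  # -> 0

# Unequal priors shift the decision: a strong prior on s0 wins here
print(lrt_decide(0.2, -1.0, 1.0, 0.5, 0.9, 0.1))   # -> 0
```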

- When the two signals $s_0(t)$ and $s_1(t)$ are equally likely, i.e., $P(s_0) = P(s_1) = 0.5$, the decision rule becomes:

$$ L(z) = \frac{p(z \mid s_1)}{p(z \mid s_0)} \underset{H_0}{\overset{H_1}{\gtrless}} 1 \qquad \text{maximum likelihood ratio test} $$

- This is known as the maximum likelihood ratio test because we select the hypothesis that corresponds to the signal with the maximum likelihood.
- In terms of the Bayes criterion, it implies that the costs of both types of error are the same.
- Substituting the pdfs:

$$ H_0: \quad p(z \mid s_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_0}{\sigma_0} \right)^2 \right] $$

$$ H_1: \quad p(z \mid s_1) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_1}{\sigma_0} \right)^2 \right] $$

$$ L(z) = \frac{p(z \mid s_1)}{p(z \mid s_0)} = \frac{\frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2\sigma_0^2} (z - a_1)^2 \right]}{\frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2\sigma_0^2} (z - a_0)^2 \right]} \underset{H_0}{\overset{H_1}{\gtrless}} 1 $$

- Hence:

$$ \exp\left[ \frac{z (a_1 - a_0)}{\sigma_0^2} - \frac{a_1^2 - a_0^2}{2 \sigma_0^2} \right] \underset{H_0}{\overset{H_1}{\gtrless}} 1 $$

- Taking the log of both sides gives:

$$ \ln\{L(z)\} = \frac{z (a_1 - a_0)}{\sigma_0^2} - \frac{a_1^2 - a_0^2}{2 \sigma_0^2} \underset{H_0}{\overset{H_1}{\gtrless}} 0 $$

$$ \frac{z (a_1 - a_0)}{\sigma_0^2} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{a_1^2 - a_0^2}{2 \sigma_0^2} = \frac{(a_1 + a_0)(a_1 - a_0)}{2 \sigma_0^2} $$
- Hence (assuming $a_1 > a_0$):

$$ z \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma_0^2 (a_1 + a_0)(a_1 - a_0)}{2 \sigma_0^2 (a_1 - a_0)} \;\Rightarrow\; z \underset{H_0}{\overset{H_1}{\gtrless}} \frac{a_1 + a_0}{2} = \gamma_0 $$

where $\gamma_0 = (a_1 + a_0)/2$ is the optimum threshold under the minimum error criterion.

- For antipodal signals, $s_1(t) = -s_0(t)$, so $a_1 = -a_0$ and:

$$ z \underset{H_0}{\overset{H_1}{\gtrless}} 0 $$

This means that if the received sample is positive, $s_1(t)$ was sent; otherwise, $s_0(t)$ was sent.
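The derived decision rule amounts to a one-line detector: compare z(T) to γ0 = (a1 + a0)/2, which reduces to a sign test for antipodal signalling. A minimal sketch (the signal levels are illustrative):

```python
def detect(z, a0, a1):
    """Minimum-error detector: compare z(T) to gamma_0 = (a1 + a0)/2."""
    gamma0 = (a1 + a0) / 2.0
    return 1 if z > gamma0 else 0

# Antipodal case: a1 = -a0, so gamma_0 = 0 and the sign of z decides
print(detect(0.7, -1.0, 1.0))    # -> 1 (decide s1 was sent)
print(detect(-0.3, -1.0, 1.0))   # -> 0 (decide s0 was sent)
```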
Probability of Error

- An error will occur if:
  - $s_1$ is sent but $s_0$ is decided:

$$ P(H_0 \mid s_1) = P(e \mid s_1) = \int_{-\infty}^{\gamma_0} p(z \mid s_1) \, dz $$

  - $s_0$ is sent but $s_1$ is decided:

$$ P(H_1 \mid s_0) = P(e \mid s_0) = \int_{\gamma_0}^{\infty} p(z \mid s_0) \, dz $$

- The total probability of error is the sum of the errors:

$$ P_B = \sum_{i=1}^{2} P(e, s_i) = P(e \mid s_1) P(s_1) + P(e \mid s_0) P(s_0) = P(H_0 \mid s_1) P(s_1) + P(H_1 \mid s_0) P(s_0) $$

- If the signals are equally probable:

$$ P_B = \frac{1}{2} \left[ P(H_0 \mid s_1) + P(H_1 \mid s_0) \right] = P(H_1 \mid s_0) \quad \text{by symmetry} $$
- Hence, the probability of bit error $P_B$ is the probability that an incorrect hypothesis is made.
- Numerically, $P_B$ is the area under the tail of either of the conditional distributions $p(z \mid s_1)$ or $p(z \mid s_0)$:

$$ P_B = P(H_1 \mid s_0) = \int_{\gamma_0}^{\infty} p(z \mid s_0) \, dz = \int_{\gamma_0}^{\infty} \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_0}{\sigma_0} \right)^2 \right] dz $$

- Substituting $u = (z - a_0)/\sigma_0$:

$$ P_B = \int_{\frac{a_1 - a_0}{2\sigma_0}}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{u^2}{2} \right) du $$

- The above integral cannot be evaluated in closed form; it is expressed with the Q-function. Hence:

$$ P_B = Q\left( \frac{a_1 - a_0}{2 \sigma_0} \right) \qquad \text{(equation B.18)} $$

where, for large x, Q(x) is well approximated by:

$$ Q(x) \approx \frac{1}{x \sqrt{2\pi}} \exp\left( -\frac{x^2}{2} \right) $$
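Equation B.18 can be checked numerically. The sketch below evaluates the exact Q-function through the complementary error function, Q(x) = ½ erfc(x/√2), and compares it with a Monte Carlo bit-error estimate for antipodal signalling; the signal levels and noise variance are illustrative.

```python
import math
import random

def Q(x):
    # Exact Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

a0, a1, sigma0 = -1.0, 1.0, 0.5
pb_theory = Q((a1 - a0) / (2.0 * sigma0))   # Q(2), about 0.0228

# Monte Carlo check: antipodal signalling, threshold gamma_0 = 0
random.seed(0)
N = 200_000
errors = 0
for _ in range(N):
    bit = random.getrandbits(1)             # equally likely symbols
    a = a1 if bit else a0
    z = a + random.gauss(0.0, sigma0)       # test statistic z(T)
    decided = 1 if z > 0.0 else 0
    errors += decided != bit
pb_sim = errors / N
print(pb_theory, pb_sim)                    # close agreement
```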
Conclusion