ECE 461 Fall 2006
Optimum Reception in AWGN
◦ Restricting to the case of memoryless modulation with no ISI (ideal AWGN channel), we can
focus on one symbol interval [0, Ts] without loss of optimality. We will also assume perfect
synchronization at the receiver to begin the analysis. The received signal model in AWGN is then
r(t) = s(t) + w(t),   0 ≤ t ≤ Ts
The signal s(t) ∈ {s1 (t), . . . , sM (t)}, and the goal of the receiver is to determine which symbol m
(equivalently, which signal sm (t)) was sent on the channel. Without the additive noise, this is a
trivial problem as long as the signals are different, i.e., the distance dkm between sk(t) and sm(t)
is nonzero for k ≠ m. What do we do in
the presence of noise? As we saw in class, the optimum receiver can be split into two steps:
◦ Step 1: Demodulation. Projection of r(t) onto the basis functions f1(t), . . . , fN(t) of the signal
space to form the vector of sufficient statistics:
R = [R1 R2 · · · RN]ᵀ,   Rn = ⟨r(t), fn(t)⟩
◦ Step 2: Detection. Decide which symbol was sent based on R.
◦ Why is it okay to split into these two steps? Principle of Irrelevance
The sufficient statistics can be written as:
Rn = ⟨r(t), fn(t)⟩ = ⟨s(t), fn(t)⟩ + ⟨w(t), fn(t)⟩ = sn + wn
where sn = ⟨s(t), fn(t)⟩ and wn = ⟨w(t), fn(t)⟩.
Note that s(t) can be represented without error (why?) in terms of the coefficients {sn}:
s(t) = ∑_{n=1}^{N} sn fn(t)
However, there is an error in the representation of the noise, since w(t) may not belong to the
signal space S, i.e.,
w(t) = ∑_{n=1}^{N} wn fn(t) + w′(t)
where w′(t) is the representation error. Thus
r(t) = ∑_{n=1}^{N} sn fn(t) + ∑_{n=1}^{N} wn fn(t) + w′(t)
As we argued in class, w′(t) is irrelevant for decision-making since it contains no information about
the signal that is transmitted. The first two terms in the above sum are sufficient for decision-
making, and the sum of these two terms is equivalent to the vector R of sufficient statistics.
Thus R is sufficient for decision-making about the signal, and the split of the receiver into the
demodulation and detection stages is justified.
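To make the demodulation step concrete, here is a minimal numerical sketch in Python; the basis functions, signal coefficients, noise level, and grid size below are illustrative choices, not part of the notes.

```python
import numpy as np

# Discretize the symbol interval [0, Ts].
Ts, Nsamp = 1.0, 2000
dt = Ts / Nsamp
t = np.arange(Nsamp) * dt

def inner(x, y):
    """Discretized inner product <x, y> = integral of x(t) y*(t) dt."""
    return np.sum(x * np.conj(y)) * dt

# Hypothetical orthonormal basis (N = 2): two complex exponentials on [0, Ts].
f = [np.exp(2j * np.pi * k * t / Ts) / np.sqrt(Ts) for k in (1, 2)]

# Transmitted signal: the point s = (s1, s2) in the signal space.
s_coeffs = np.array([1.0 + 1.0j, -0.5j])
s = sum(c * fn for c, fn in zip(s_coeffs, f))

# Complex AWGN with PSD N0: i.i.d. samples with variance N0/dt.
N0 = 0.1
rng = np.random.default_rng(0)
w = (rng.standard_normal(Nsamp) + 1j * rng.standard_normal(Nsamp)) \
    * np.sqrt(N0 / (2 * dt))
r = s + w

# Step 1 (demodulation): correlate r against each basis function.
R = np.array([inner(r, fn) for fn in f])
print(R)   # close to s_coeffs; the deviation is W ~ CN(0, N0 I)
```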
◦ Correlation as Matched Filtering
By defining the filter with impulse response hn that satisfies hn(Ts − t) = fn*(t), we see that
Rn = ∫_0^{Ts} r(t) fn*(t) dt = ∫_0^{Ts} r(t) hn(Ts − t) dt = ∫_{−∞}^{∞} r(t) hn(Ts − t) dt = (r ∗ hn)(Ts)
Thus the sufficient statistic Rn can be obtained by passing r(t) through a matched filter with
impulse response hn(t) and sampling the output at time Ts.
Note that hn(t) = fn*(Ts − t), and thus hn is a causal filter with impulse response limited to [0, Ts].
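Continuing the numerical sketch above, one can verify that the matched-filter output sampled at t = Ts coincides with the correlator output (this reuses r, f, R, dt, and Nsamp from the previous block):

```python
# Matched filter for fn: hn(t) = fn*(Ts - t), i.e., the conjugated,
# time-reversed basis vector on the discrete grid.
for n, fn in enumerate(f):
    hn = np.conj(fn[::-1])
    y = np.convolve(r, hn) * dt        # discretized convolution (r * hn)(t)
    print(n, y[Nsamp - 1], R[n])       # the sample at t = Ts equals R[n]
```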
◦ SNR Maximization and Matched Filtering
We showed in class, using the Cauchy-Schwarz inequality, that the matched filter also maximizes
the signal-to-noise ratio (SNR) at the output of the receiver. This is another justification for the
demodulation step.
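For reference, a brief sketch of that argument, stated for the complex baseband model of these notes (noise PSD N0): passing r(t) = s(t) + w(t) through a filter h and sampling at Ts gives signal component ∫_0^{Ts} s(t) h(Ts − t) dt and noise of variance N0 ∫_0^{Ts} |h(t)|² dt, so that
SNR = |∫_0^{Ts} s(t) h(Ts − t) dt|² / ( N0 ∫_0^{Ts} |h(t)|² dt ) ≤ (1/N0) ∫_0^{Ts} |s(t)|² dt
by the Cauchy-Schwarz inequality, with equality iff h(Ts − t) ∝ s*(t), i.e., h(t) ∝ s*(Ts − t): the matched filter.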
◦ In summary, the vector of sufficient statistics R can be obtained through a bank of correlators
{fn}, or equivalently through a bank of matched filters {hn} followed by sampling at time Ts.
In either case,
R = s + W
where W is CN(0, N0 I), i.e., the {Wn} are i.i.d. CN(0, N0) random variables.
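As a quick sanity check of this statistical model, the Monte Carlo sketch below (continuing the Python example above; the number of trials is an arbitrary choice) estimates the covariance of the projected noise:

```python
# Continuing the earlier sketch: project fresh noise realizations onto the
# basis and estimate the covariance of W = (W1, ..., WN).
trials = 5000
W = np.empty((trials, len(f)), dtype=complex)
for i in range(trials):
    w = (rng.standard_normal(Nsamp) + 1j * rng.standard_normal(Nsamp)) \
        * np.sqrt(N0 / (2 * dt))
    W[i] = [inner(w, fn) for fn in f]
print(np.cov(W, rowvar=False))   # approximately N0 * I = 0.1 * I
```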
◦ We now study the detection step in more detail.
Optimum Detection
◦ The information we can use to distinguish between the various symbols is statistical and is con-
tained in the conditional distributions of R, given that each symbol is sent. These conditional
distributions are also called likelihood functions.
◦ Likelihood function: The conditional distribution of R, given that s = sm, is denoted by pm(r).
◦ The goal is to choose m̂, our estimate of the symbol that was sent, based on the likelihood
functions pm(r), m = 1, 2, . . . , M. What criterion should we use for choosing m̂?
◦ (Average) Probability of symbol error
Pe = P{m̂(R) ≠ msent}
Assuming that the prior probability of sending symbol m is νm, we can write
Pe = ∑_{m=1}^{M} νm Pe,m
where
Pe,m = P({m̂(R) ≠ m} | m sent)
◦ Minimum Probability of Error (MPE) Detection
As we showed in class, the MPE detector chooses m̂ as:
m̂MPE(r) = arg max_m νm pm(r)
We also showed, using Bayes' rule, that the MPE detector maximizes the a posteriori probability
that symbol m was sent given that r is received, i.e.,
m̂MPE (r) = m̂MAP (r)
Finally, if the symbols are equally likely, i.e., νm = 1/M for m = 1, 2, . . . , M, then
m̂MPE(r) = arg max_m νm pm(r) = arg max_m pm(r) = m̂ML(r)
where m̂ML is the maximum likelihood (ML) decision rule. Note that we typically assume that
the symbols are equally likely.
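To illustrate the three rules, here is a minimal detector sketch in Python; the constellation, priors, and noise level are hypothetical choices for illustration. Since R = sm + W with W ~ CN(0, N0 I), the likelihood is pm(r) = (πN0)^{−N} exp(−‖r − sm‖²/N0), so the MPE rule maximizes log νm − ‖r − sm‖²/N0 and the ML rule reduces to minimum-distance detection.

```python
import numpy as np

# Hypothetical constellation: M = 4 symbols in an N = 2 signal space,
# with non-uniform priors nu. Symbols are indexed 0, ..., M-1 here.
S = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=complex)
nu = np.array([0.4, 0.3, 0.2, 0.1])

def detect_mpe(R, S, nu, N0):
    """MPE/MAP rule: maximize log(nu_m) - ||R - s_m||^2 / N0."""
    metric = np.log(nu) - np.sum(np.abs(R - S) ** 2, axis=1) / N0
    return np.argmax(metric)

def detect_ml(R, S):
    """ML rule (equal priors): minimum-distance detection."""
    return np.argmin(np.sum(np.abs(R - S) ** 2, axis=1))

rng = np.random.default_rng(1)
N0, m = 0.5, 2                                      # noise level, sent symbol
W = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) * np.sqrt(N0 / 2)
R = S[m] + W
print(detect_mpe(R, S, nu, N0), detect_ml(R, S))    # both typically print 2
```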
© V.V. Veeravalli, 2006