Chapter 9
The General Gaussian Problem
9.1 (a) We first diagonalize the matrix C. The eigenvalues are given by

$$|C - \lambda I| = \begin{vmatrix} 1-\lambda & -1/2 \\ -1/2 & 1-\lambda \end{vmatrix} = 0 \implies \lambda_1 = \frac{1}{2} \ \text{ and } \ \lambda_2 = 1.5$$

For $\lambda_1$,

$$C\phi_1 = \lambda_1\phi_1 \implies \begin{bmatrix} 1 & -1/2 \\ -1/2 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \frac{1}{2}\begin{bmatrix} a \\ b \end{bmatrix} \implies a = b = \frac{\sqrt{2}}{2}$$

after normalization. Therefore, $\phi_1 = \begin{bmatrix} \sqrt{2}/2 \\ \sqrt{2}/2 \end{bmatrix}$, and similarly

$$C\phi_2 = \lambda_2\phi_2 \implies \phi_2 = \begin{bmatrix} -\sqrt{2}/2 \\ \sqrt{2}/2 \end{bmatrix}$$

We form the modal matrix $M = [\phi_1 \ \ \phi_2]$:

$$M = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \qquad\text{and}\qquad M^{-1} = M^T = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$$
Therefore, the observation vector $y'$ in the new coordinate system is
Signal detection and estimation
$$y' = M^{-1}y = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \implies y_1' = \frac{\sqrt{2}}{2}(y_1+y_2) \ \text{ and } \ y_2' = \frac{\sqrt{2}}{2}(y_2-y_1)$$

The mean vector $m_1'$ is

$$m_1' = M^{-1}m_1 \implies m_{11}' = \frac{\sqrt{2}}{2}(m_{11}+m_{12}) \ \text{ and } \ m_{12}' = \frac{\sqrt{2}}{2}(m_{12}-m_{11})$$

and $\Delta m' = m_1' - m_0' = m_1'$, since $m_0 = 0$. The sufficient statistic is

$$T(y') = \sum_{k=1}^{2}\frac{m_{1k}'y_k'}{\lambda_k} = \frac{m_{11}'y_1'}{1/2} + \frac{m_{12}'y_2'}{1.5} = (m_{11}+m_{12})(y_1+y_2) + \frac{1}{3}(m_{12}-m_{11})(y_2-y_1)$$

and the decision rule is

$$T(y') \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_1 = \ln\eta + \frac{1}{2}m_1^T C^{-1} m_1$$
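As a numerical check of the diagonalization (the mean vector and observation used below are illustrative values, not part of the problem), NumPy's `eigh` reproduces the eigenvalues, and the diagonalized statistic equals the quadratic form $m_1^T C^{-1} y$:

```python
import numpy as np

# Covariance matrix of Problem 9.1(a)
C = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

# Eigendecomposition; the modal matrix M has the eigenvectors as columns
lam, M = np.linalg.eigh(C)            # eigenvalues in ascending order
assert np.allclose(lam, [0.5, 1.5])
# M^T C M is the diagonal matrix of eigenvalues
assert np.allclose(M.T @ C @ M, np.diag(lam), atol=1e-12)

# Sufficient statistic in rotated coordinates versus the closed form above
m1 = np.array([1.0, 2.0])             # illustrative mean vector (assumption)
y = np.array([0.3, -0.7])             # illustrative observation (assumption)
T_rotated = np.sum((M.T @ m1) * (M.T @ y) / lam)
T_closed = (m1[0] + m1[1]) * (y[0] + y[1]) \
    + (m1[1] - m1[0]) * (y[1] - y[0]) / 3.0
assert np.allclose(T_rotated, T_closed)

# Both equal the quadratic form m1^T C^{-1} y that the diagonalization computes
assert np.allclose(T_rotated, m1 @ np.linalg.solve(C, y))
```

The statistic is invariant to the arbitrary sign of each eigenvector, since a sign flip changes $m_{1k}'$ and $y_k'$ together.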
(b) Now

$$C = \begin{bmatrix} 1 & -0.1 \\ -0.1 & 1 \end{bmatrix} \implies \lambda_1 = 0.9 \ \text{ and } \ \lambda_2 = 1.1$$

The eigenvectors, and thus the modal matrix, are the same as in (a):

$$M = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}, \qquad M^{-1} = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$$

and

$$y' = M^{-1}y \implies y_1' = \frac{\sqrt{2}}{2}(y_1+y_2) \ \text{ and } \ y_2' = \frac{\sqrt{2}}{2}(y_2-y_1)$$

while $m_{11}' = \frac{\sqrt{2}}{2}(m_{11}+m_{12})$ and $m_{12}' = \frac{\sqrt{2}}{2}(m_{12}-m_{11})$ as before. The sufficient statistic is

$$T(y') = \sum_{k=1}^{2}\frac{m_{1k}'y_k'}{\lambda_k} = \frac{m_{11}'y_1'}{0.9} + \frac{m_{12}'y_2'}{1.1} = 0.56(m_{11}+m_{12})(y_1+y_2) + 0.45(m_{12}-m_{11})(y_2-y_1)$$
(c) Similarly,

$$C = \begin{bmatrix} 1 & -0.9 \\ -0.9 & 1 \end{bmatrix} \implies \lambda_1 = 0.1 \ \text{ and } \ \lambda_2 = 1.9$$

with the same modal matrix, so that

$$T(y') = \sum_{k=1}^{2}\frac{m_{1k}'y_k'}{\lambda_k} = \frac{m_{11}'y_1'}{0.1} + \frac{m_{12}'y_2'}{1.9} = 5(m_{11}+m_{12})(y_1+y_2) + 0.26(m_{12}-m_{11})(y_2-y_1)$$
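The coefficient pairs in (b) and (c) are simply $1/(2\lambda_k)$ for the sum and difference directions. A short sketch verifying them, with the matrices as reconstructed above:

```python
import numpy as np

# Eigenvalues of [[1, rho], [rho, 1]] are 1+rho (sum direction [1,1]) and
# 1-rho (difference direction [1,-1]); the T coefficients are 1/(2*lambda).
for rho, label in [(-0.1, "(b)"), (-0.9, "(c)")]:
    C = np.array([[1.0, rho], [rho, 1.0]])
    lam = np.linalg.eigvalsh(C)
    assert np.allclose(sorted(lam), sorted([1.0 + rho, 1.0 - rho]))
    c_sum = 1.0 / (2.0 * (1.0 + rho))    # multiplies (m11+m12)(y1+y2)
    c_diff = 1.0 / (2.0 * (1.0 - rho))   # multiplies (m12-m11)(y2-y1)
    print(f"{label}: {c_sum:.2f}, {c_diff:.2f}")
# prints (b): 0.56, 0.45 then (c): 5.00, 0.26
```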
9.2 We have

$$C = \begin{bmatrix} 1 & -0.9 \\ -0.9 & 2 \end{bmatrix} \implies |C-\lambda I| = 0 \implies \lambda_1 = 0.47 \ \text{ and } \ \lambda_2 = 2.53$$

The normalized eigenvectors are

$$\phi_1 = \begin{bmatrix} 0.86 \\ 0.51 \end{bmatrix} \ \text{ and } \ \phi_2 = \begin{bmatrix} -0.51 \\ 0.86 \end{bmatrix} \implies M = \begin{bmatrix} 0.86 & -0.51 \\ 0.51 & 0.86 \end{bmatrix}, \quad M^{-1} = M^T = \begin{bmatrix} 0.86 & 0.51 \\ -0.51 & 0.86 \end{bmatrix}$$

Then

$$y' = M^{-1}y = \begin{bmatrix} 0.86 & 0.51 \\ -0.51 & 0.86 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \implies \begin{cases} y_1' = 0.86y_1 + 0.51y_2 \\ y_2' = 0.86y_2 - 0.51y_1 \end{cases}$$

and $m_1' = M^{-1}m_1$ gives

$$m_{11}' = 0.86m_{11} + 0.51m_{12} \ \text{ and } \ m_{12}' = 0.86m_{12} - 0.51m_{11}$$

The sufficient statistic is then

$$T(y') = \frac{(0.86m_{11}+0.51m_{12})(0.86y_1+0.51y_2)}{0.47} + \frac{(0.86m_{12}-0.51m_{11})(0.86y_2-0.51y_1)}{2.53}$$

or

$$T(y') = (1.83m_{11}+1.09m_{12})(0.86y_1+0.51y_2) + (0.34m_{12}-0.20m_{11})(0.86y_2-0.51y_1)$$
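A quick numerical check of the values in this problem (eigenvalues, eigenvector entries, and the scaled coefficients 1.83, 1.09, 0.20 and 0.34):

```python
import numpy as np

C = np.array([[1.0, -0.9],
              [-0.9, 2.0]])
lam, M = np.linalg.eigh(C)               # ascending order
assert np.allclose(lam, [0.47, 2.53], atol=5e-3)

# eigenvector of the smaller eigenvalue (overall sign is arbitrary)
assert np.allclose(np.abs(M[:, 0]), [0.86, 0.51], atol=5e-3)

# scaled mean coefficients: each entry of phi_k divided by lambda_k
assert abs(0.86 / 0.47 - 1.83) < 0.01
assert abs(0.51 / 0.47 - 1.09) < 0.01
assert abs(0.51 / 2.53 - 0.20) < 0.005
assert abs(0.86 / 2.53 - 0.34) < 0.005
```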
9.3 The noise samples are $N(0, \sigma_n^2)$.

(a) $E[Y_k|H_j] = 0$ for $k = 1, 2$ and $j = 0, 1$, so $m_1 = m_0 = 0$, and

$$H_0: Y_k = N_k \implies C_0 = C_n = \sigma_n^2 I = \begin{bmatrix} \sigma_n^2 & 0 \\ 0 & \sigma_n^2 \end{bmatrix}$$

$$H_1: Y_k = S_k + N_k \implies C_1 = C_s + C_n = \begin{bmatrix} \sigma_s^2+\sigma_n^2 & 0 \\ 0 & \sigma_s^2+\sigma_n^2 \end{bmatrix}, \ \text{ since } C_s = \sigma_s^2 I$$

From Equation (9.64), the LRT reduces to the following decision rule

$$T(y) = \frac{\sigma_s^2}{\sigma_n^2(\sigma_s^2+\sigma_n^2)}\sum_{k=1}^{2}y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_2, \quad\text{where } \gamma_2 = 2\ln\eta + (\ln|C_1| - \ln|C_0|)$$

or,

$$T(y) = \sum_{k=1}^{2}y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_3 = \frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\gamma_2$$

(b) $P_1 = P_0 = \frac{1}{2}$ and the minimum probability of error criterion give $\eta = 1$, so

$$\gamma_2 = 2\ln\frac{\sigma_s^2+\sigma_n^2}{\sigma_n^2} \quad\text{and}\quad \gamma_3 = 2\,\frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\ln\frac{\sigma_s^2+\sigma_n^2}{\sigma_n^2}$$

The density functions of the sufficient statistic under $H_1$ and $H_0$, from Equations (9.71) and (9.72), are

$$f_{T|H_1}(t|H_1) = \begin{cases} \dfrac{1}{2\sigma_1^2}\,e^{-t/2\sigma_1^2}, & t>0 \\ 0, & \text{otherwise} \end{cases}$$

and

$$f_{T|H_0}(t|H_0) = \begin{cases} \dfrac{1}{2\sigma_0^2}\,e^{-t/2\sigma_0^2}, & t>0 \\ 0, & \text{otherwise} \end{cases}$$

where $\sigma_1^2 = \sigma_s^2+\sigma_n^2$ and $\sigma_0^2 = \sigma_n^2$. Consequently,

$$P_F = \int_{\gamma_3}^{\infty}\frac{1}{2\sigma_n^2}\,e^{-t/2\sigma_n^2}\,dt = e^{-\gamma_3/2\sigma_n^2}$$

and

$$P_D = \int_{\gamma_3}^{\infty}\frac{1}{2\sigma_1^2}\,e^{-t/2\sigma_1^2}\,dt = e^{-\gamma_3/2\sigma_1^2} = e^{-\gamma_3/2(\sigma_n^2+\sigma_s^2)}$$
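These closed forms can be sanity-checked by simulation; the variance values below are illustrative assumptions, not given in the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
sn2, ss2 = 1.0, 2.0                  # illustrative noise/signal variances
s12 = ss2 + sn2

# minimum-probability-of-error threshold (eta = 1)
gamma3 = 2 * sn2 * s12 / ss2 * np.log(s12 / sn2)
PF = np.exp(-gamma3 / (2 * sn2))
PD = np.exp(-gamma3 / (2 * s12))

# Monte Carlo: T = y1^2 + y2^2 under each hypothesis
N = 200_000
T0 = np.sum(rng.normal(0, np.sqrt(sn2), (N, 2))**2, axis=1)
T1 = np.sum(rng.normal(0, np.sqrt(s12), (N, 2))**2, axis=1)
assert abs(np.mean(T0 > gamma3) - PF) < 0.01
assert abs(np.mean(T1 > gamma3) - PD) < 0.01
```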
9.4 (a) $K = 4$, so

$$C_0 = C_n = \sigma_n^2 I = \begin{bmatrix} \sigma_n^2&0&0&0 \\ 0&\sigma_n^2&0&0 \\ 0&0&\sigma_n^2&0 \\ 0&0&0&\sigma_n^2 \end{bmatrix}$$

and

$$C_1 = C_s + C_n = \begin{bmatrix} \sigma_s^2+\sigma_n^2&0&0&0 \\ 0&\sigma_s^2+\sigma_n^2&0&0 \\ 0&0&\sigma_s^2+\sigma_n^2&0 \\ 0&0&0&\sigma_s^2+\sigma_n^2 \end{bmatrix}$$

where $C_s = \sigma_s^2 I$. Hence,

$$T(y) = \frac{\sigma_s^2}{\sigma_n^2(\sigma_s^2+\sigma_n^2)}\sum_{k=1}^{4}y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_2$$

or,

$$T(y) = \sum_{k=1}^{4}y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_3 = \frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\gamma_2$$

The statistic is $T(y) = \sum_{k=1}^{4}y_k^2$.

(b) Here

$$\gamma_2 = 4\ln\frac{\sigma_s^2+\sigma_n^2}{\sigma_n^2} \quad\text{and}\quad \gamma_3 = \frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\gamma_2 = 4\,\frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\ln\frac{\sigma_s^2+\sigma_n^2}{\sigma_n^2}$$

With $\sigma_0^2 = \sigma_n^2$ and $\sigma_1^2 = \sigma_s^2+\sigma_n^2$, $T$ is chi-square with four degrees of freedom under each hypothesis, and the conditional density functions are

$$f_{T|H_0}(t|H_0) = \begin{cases} \dfrac{1}{4\sigma_0^4}\,t\,e^{-t/2\sigma_0^2}, & t>0 \\ 0, & \text{otherwise} \end{cases}$$

and

$$f_{T|H_1}(t|H_1) = \begin{cases} \dfrac{1}{4\sigma_1^4}\,t\,e^{-t/2\sigma_1^2}, & t>0 \\ 0, & \text{otherwise} \end{cases}$$

The probabilities of false alarm and detection are then

$$P_F = \int_{\gamma_3}^{\infty}\frac{t}{4\sigma_n^4}\,e^{-t/2\sigma_n^2}\,dt = \left(1+\frac{\gamma_3}{2\sigma_n^2}\right)e^{-\gamma_3/2\sigma_n^2}$$

and

$$P_D = \int_{\gamma_3}^{\infty}\frac{t}{4\sigma_1^4}\,e^{-t/2\sigma_1^2}\,dt = \left(1+\frac{\gamma_3}{2\sigma_1^2}\right)e^{-\gamma_3/2\sigma_1^2}$$
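A simulation check of the four-sample case, again with illustrative variance values that are assumptions rather than part of the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
sn2, ss2 = 1.0, 2.0                  # illustrative variances (assumption)
s02, s12 = sn2, ss2 + sn2

gamma2 = 4 * np.log(s12 / sn2)
gamma3 = sn2 * s12 / ss2 * gamma2

# closed forms derived above
PF = (1 + gamma3 / (2 * s02)) * np.exp(-gamma3 / (2 * s02))
PD = (1 + gamma3 / (2 * s12)) * np.exp(-gamma3 / (2 * s12))

# Monte Carlo with K = 4 samples per decision
N = 200_000
T0 = np.sum(rng.normal(0, np.sqrt(s02), (N, 4))**2, axis=1)
T1 = np.sum(rng.normal(0, np.sqrt(s12), (N, 4))**2, axis=1)
assert abs(np.mean(T0 > gamma3) - PF) < 0.01
assert abs(np.mean(T1 > gamma3) - PD) < 0.01
```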
9.5 ROC of Problem 9.3 with SNR = 1, SNR = 2 and SNR = 10.

[Figure: the three ROC curves, $P_D$ versus $P_F$, both axes from 0 to 1; the curves bow toward the upper left corner as the SNR increases.]
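Eliminating $\gamma_3$ between the $P_F$ and $P_D$ expressions of Problem 9.3 gives $P_D = P_F^{\sigma_n^2/(\sigma_n^2+\sigma_s^2)} = P_F^{1/(1+\mathrm{SNR})}$ with $\mathrm{SNR} = \sigma_s^2/\sigma_n^2$, from which the curves can be generated:

```python
import numpy as np

# PF = exp(-g/2sn2) and PD = exp(-g/2(sn2+ss2)); eliminating the
# threshold g gives PD = PF**(1/(1+SNR)).
pf = np.linspace(1e-6, 1.0, 500)
curves = {snr: pf ** (1.0 / (1.0 + snr)) for snr in (1, 2, 10)}

for pd in curves.values():
    assert np.all(pd >= pf)          # every ROC lies above the chance line
# a higher SNR gives uniformly better detection
assert np.all(curves[10] >= curves[2])
assert np.all(curves[2] >= curves[1])
```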
9.6 Here

$$C_s = \begin{bmatrix} \sigma_s^2 & 0 \\ 0 & 2\sigma_s^2 \end{bmatrix}, \qquad C_n = \begin{bmatrix} \sigma_n^2 & 0 \\ 0 & \sigma_n^2 \end{bmatrix}$$

From (9.78), the LRT is

$$T(y) = \frac{1}{\sigma_n^2}\sum_{k=1}^{2}\frac{\sigma_{s_k}^2}{\sigma_{s_k}^2+\sigma_n^2}\,y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_2$$

with $\sigma_{s_1}^2 = \sigma_s^2$ and $\sigma_{s_2}^2 = 2\sigma_s^2$. Multiplying through by $\sigma_n^2(\sigma_s^2+\sigma_n^2)(2\sigma_s^2+\sigma_n^2)/\sigma_s^2$,

$$(2\sigma_s^2+\sigma_n^2)\,y_1^2 + 2(\sigma_s^2+\sigma_n^2)\,y_2^2 \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)(2\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\gamma_2$$
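The equivalence of the two forms of the test can be checked numerically; the variance and threshold values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
ss2, sn2 = 1.5, 0.7                  # illustrative variances (assumption)
ss = np.array([ss2, 2 * ss2])        # sigma_{s1}^2 and sigma_{s2}^2
y = rng.normal(size=2)               # arbitrary observation
g2 = 0.9                             # arbitrary threshold value (assumption)

# Form 1: (1/sn2) * sum_k ss_k^2/(ss_k^2 + sn2) * y_k^2, compared to gamma_2
lhs1 = np.sum(ss / (ss + sn2) * y**2) / sn2 - g2

# Form 2 after clearing the denominators
scale = sn2 * (ss2 + sn2) * (2 * ss2 + sn2) / ss2
lhs2 = (2 * ss2 + sn2) * y[0]**2 + 2 * (ss2 + sn2) * y[1]**2 - scale * g2

# Same decision: form 2 is form 1 scaled by a positive constant
assert np.allclose(lhs1 * scale, lhs2)
```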
9.7 (a) Here

$$C_0 = \begin{bmatrix} C_n & 0 \\ 0 & C_s+C_n \end{bmatrix} \quad\text{and}\quad C_1 = \begin{bmatrix} C_s+C_n & 0 \\ 0 & C_n \end{bmatrix}$$

where $C_n = \begin{bmatrix} 1&0\\0&1 \end{bmatrix}$ and $C_s = \begin{bmatrix} 2&0\\0&2 \end{bmatrix}$. That is,

$$C_0 = \begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&3&0\\0&0&0&3 \end{bmatrix} \quad\text{and}\quad C_1 = \begin{bmatrix} 3&0&0&0\\0&3&0&0\\0&0&1&0\\0&0&0&1 \end{bmatrix}$$

From (9.88), the optimum test reduces to

$$T(y) = \sum_{k=1}^{2}y_k^2 - \sum_{k=3}^{4}y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_3$$

where

$$\gamma_3 = \frac{\sigma_n^2(\sigma_s^2+\sigma_n^2)}{\sigma_s^2}\gamma_2, \qquad \sigma_s^2 = 2, \ \sigma_n^2 = 1$$

and $\gamma_2 = 2\ln\eta + (\ln|C_1| - \ln|C_0|) = 2\ln\eta$, since $|C_1| = |C_0| = 9$. Hence $\gamma_3 = 3\ln\eta$.
(b) For $\eta = 1$, $\gamma_3 = 0$ and the test reduces to

$$\sum_{k=1}^{2}y_k^2 \underset{H_0}{\overset{H_1}{\gtrless}} \sum_{k=3}^{4}y_k^2$$

Let $T_1 = \sum_{k=1}^{2}Y_k^2$ and $T_0 = \sum_{k=3}^{4}Y_k^2$. From (9.94), (9.95) and (9.96), the probability of error is $P(\varepsilon) = \frac{1}{2}P(\varepsilon|H_0) + \frac{1}{2}P(\varepsilon|H_1)$ with

$$P(\varepsilon|H_0) = \int_0^{\infty}\!\!\int_0^{t_1} f_{T_1T_0}(t_1,t_0|H_0)\,dt_0\,dt_1 \quad\text{and}\quad P(\varepsilon|H_1) = \int_0^{\infty}\!\!\int_{t_1}^{\infty} f_{T_1T_0}(t_1,t_0|H_1)\,dt_0\,dt_1$$

Under $H_1$, the statistics are independent with $f_{T_1}(t_1) = \frac{1}{6}e^{-t_1/6}$ and $f_{T_0}(t_0) = \frac{1}{2}e^{-t_0/2}$. Therefore,

$$P(\varepsilon|H_1) = \int_0^{\infty}\frac{1}{6}e^{-t_1/6}\left(\int_{t_1}^{\infty}\frac{1}{2}e^{-t_0/2}\,dt_0\right)dt_1 = \frac{1}{6}\int_0^{\infty}e^{-2t_1/3}\,dt_1 = \frac{1}{4}$$

By symmetry, $P(\varepsilon|H_0) = \frac{1}{4}$ as well, so that $P(\varepsilon) = \frac{1}{4}$.
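A Monte Carlo sketch confirming $P(\varepsilon) = 1/4$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400_000

# H0: first two components have variance 1, last two variance 3; H1 swaps them
v0 = np.array([1.0, 1.0, 3.0, 3.0])
y0 = rng.normal(0, np.sqrt(v0), (N, 4))
y1 = rng.normal(0, np.sqrt(v0[::-1]), (N, 4))

def T(y):
    # test statistic: energy in the first pair minus energy in the second
    return np.sum(y[:, :2]**2, axis=1) - np.sum(y[:, 2:]**2, axis=1)

# decide H1 when T > 0; average the two conditional error rates
Pe = 0.5 * np.mean(T(y0) > 0) + 0.5 * np.mean(T(y1) < 0)
assert abs(Pe - 0.25) < 0.005
```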
9.8 (a) For

$$C = \begin{bmatrix} 1&0.9&0.5\\0.9&1&0.1\\0.5&0.1&1 \end{bmatrix}$$

the eigenvalues are obtained from

$$|C - \lambda I| = 0 \implies \lambda_1 = 0.0105, \ \lambda_2 = 0.9153, \ \lambda_3 = 2.0741$$

Solving $C\phi_k = \lambda_k\phi_k$ for each eigenvalue,

$$\phi_1 = \begin{bmatrix} 0.7204\\-0.6249\\-0.3009 \end{bmatrix}, \quad \phi_2 = \begin{bmatrix} 0.0519\\0.4812\\-0.8750 \end{bmatrix}, \quad \phi_3 = \begin{bmatrix} 0.6916\\0.6148\\0.3792 \end{bmatrix}$$

The modal matrix is

$$M = \begin{bmatrix} 0.7204&0.0519&0.6916\\-0.6249&0.4812&0.6148\\-0.3009&-0.8750&0.3792 \end{bmatrix}$$

and $y' = M^{-1}y = M^Ty$:

$$y_1' = 0.72y_1 - 0.62y_2 - 0.30y_3, \quad y_2' = 0.05y_1 + 0.48y_2 - 0.88y_3, \quad y_3' = 0.69y_1 + 0.61y_2 + 0.38y_3$$

Similarly, $m_1' = M^Tm_1$, and then we use

$$T(y') = \sum_{k=1}^{3}\frac{m_{1k}'y_k'}{\lambda_k}$$
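The eigenvalues and modal matrix can be reproduced with NumPy:

```python
import numpy as np

C = np.array([[1.0, 0.9, 0.5],
              [0.9, 1.0, 0.1],
              [0.5, 0.1, 1.0]])
lam, M = np.linalg.eigh(C)           # ascending order
assert np.allclose(lam, [0.0105, 0.9153, 2.0741], atol=5e-4)

# M diagonalizes C, so y' = M^T y decorrelates the observations
assert np.allclose(M.T @ C @ M, np.diag(lam), atol=1e-12)

# eigenvector of the largest eigenvalue (sign is arbitrary)
assert np.allclose(np.abs(M[:, 2]), [0.6916, 0.6148, 0.3792], atol=1e-3)
```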
(b) For

$$C = \begin{bmatrix} 1&0.8&0.6&0.2\\0.8&1&0.8&0.6\\0.6&0.8&1&0.8\\0.2&0.6&0.8&1 \end{bmatrix}$$

we obtain $\lambda_1 = 0.1394$, $\lambda_2 = 0.0682$, $\lambda_3 = 0.8606$ and $\lambda_4 = 2.9318$, whereas

$$\phi_1 = \begin{bmatrix} 0.2049\\-0.6768\\0.6768\\-0.2049 \end{bmatrix}, \quad \phi_2 = \begin{bmatrix} 0.5499\\-0.4445\\-0.4445\\0.5499 \end{bmatrix}, \quad \phi_3 = \begin{bmatrix} 0.6768\\0.2049\\-0.2049\\-0.6768 \end{bmatrix} \quad\text{and}\quad \phi_4 = \begin{bmatrix} 0.4445\\0.5499\\0.5499\\0.4445 \end{bmatrix}$$

and the modal matrix is

$$M = [\phi_1\ \ \phi_2\ \ \phi_3\ \ \phi_4] = \begin{bmatrix} 0.2049&0.5499&0.6768&0.4445\\-0.6768&-0.4445&0.2049&0.5499\\0.6768&-0.4445&-0.2049&0.5499\\-0.2049&0.5499&-0.6768&0.4445 \end{bmatrix}$$

As in (a), $y' = M^Ty$, $m_1' = M^Tm_1$, and $T(y') = \sum_{k=1}^{4}m_{1k}'y_k'/\lambda_k$.
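The eigenvalues and the symmetric/antisymmetric eigenvector structure in part (b) can be checked the same way:

```python
import numpy as np

C = np.array([[1.0, 0.8, 0.6, 0.2],
              [0.8, 1.0, 0.8, 0.6],
              [0.6, 0.8, 1.0, 0.8],
              [0.2, 0.6, 0.8, 1.0]])
lam, M = np.linalg.eigh(C)           # ascending order
assert np.allclose(lam, [0.0682, 0.1394, 0.8606, 2.9318], atol=5e-4)

# eigenvector of the largest eigenvalue: symmetric, [a, b, b, a] form
assert np.allclose(np.abs(M[:, 3]), [0.4445, 0.5499, 0.5499, 0.4445], atol=1e-3)
```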