
Week 1: Probability revision

Workshop aims
• Probability revision

• To look at some properties of Gaussian random variables

Recommended questions: 1.1, 2.2, 2.3

1 Random variables
1.1 Let $X : \Omega \to \mathbb{R}$ and $Y : \Omega \to \mathbb{R}$ be two random variables with probability density functions $f_X : \mathbb{R} \to \mathbb{R}$ and $f_Y : \mathbb{R} \to \mathbb{R}$ respectively, and with joint probability density function $f_{X,Y} : \mathbb{R}^2 \to \mathbb{R}$. Show that, for any $\alpha, \beta \in \mathbb{R}$, we have
$$E(\alpha X + \beta Y) = \alpha E(X) + \beta E(Y).$$

Solution This follows from the linearity of integration. Specifically,
$$\begin{aligned}
E(\alpha X + \beta Y) &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (\alpha x + \beta y)\, f_{X,Y}(x, y)\, dx\, dy \\
&= \alpha \int_{-\infty}^{\infty} x \left( \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy \right) dx + \beta \int_{-\infty}^{\infty} y \left( \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx \right) dy.
\end{aligned}$$
Now we use¹
$$\int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy = f_X(x), \qquad \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = f_Y(y),$$
leading to
$$E(\alpha X + \beta Y) = \alpha \int_{-\infty}^{\infty} x f_X(x)\, dx + \beta \int_{-\infty}^{\infty} y f_Y(y)\, dy = \alpha E(X) + \beta E(Y).$$
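As a quick sanity check, the identity can be illustrated in Python using SymPy's sympy.stats module, in the spirit of the SymPy demonstration in question 2.1. A minimal sketch with arbitrary example parameters; note that distinct sympy.stats random symbols are independent by construction, though linearity itself requires no independence:

import sympy as sp
from sympy.stats import E, Normal

alpha, beta = sp.symbols("alpha beta", real=True)

# Example Gaussian random variables with arbitrary parameters;
# sympy.stats constructs distinct random symbols as independent,
# but E(alpha X + beta Y) = alpha E(X) + beta E(Y) holds regardless.
X = Normal("X", 1, 2)
Y = Normal("Y", -3, 1)

# Should print 0
print(sp.simplify(E(alpha * X + beta * Y) - (alpha * E(X) + beta * E(Y))))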

1.2 Now let $X$ and $Y$ be independent. Show that
$$V(X + Y) = V(X) + V(Y).$$

Solution We have
$$\begin{aligned}
V(X + Y) &= E\left[ (X + Y - E(X + Y))^2 \right] \\
&= E\left[ (X + Y - E(X) - E(Y))^2 \right] \\
&= E\left[ (X - E(X))^2 + (Y - E(Y))^2 + 2 (X - E(X)) (Y - E(Y)) \right] \\
&= E\left[ (X - E(X))^2 \right] + E\left[ (Y - E(Y))^2 \right] + 2 E\left[ XY - X E(Y) - Y E(X) + E(X) E(Y) \right] \\
&= V(X) + V(Y) + 2 E(XY) - 2 E(X) E(Y) \\
&= V(X) + V(Y) + 2 E(X) E(Y) - 2 E(X) E(Y) \\
&= V(X) + V(Y).
\end{aligned}$$
Here $E(XY) = E(X) E(Y)$ has been used, which follows since $X$ and $Y$ are independent.
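This too can be checked in SymPy, e.g. with two independent Gaussian random variables (the parameters below are arbitrary examples):

import sympy as sp
from sympy.stats import Normal, variance

# Distinct sympy.stats random symbols are independent by construction
X = Normal("X", 1, 2)   # variance 4
Y = Normal("Y", -3, 5)  # variance 25

# Both lines should print 29, i.e. V(X + Y) = V(X) + V(Y)
print(variance(X + Y))
print(variance(X) + variance(Y))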

2 Gaussian distribution
2.1 Consider a Gaussian random variable $X : \Omega \to \mathbb{R}$ with $X \sim \mathcal{N}(\mu, \sigma^2)$, where $\sigma > 0$. Show, using the probability density function for $X$, that
$$E(X) = \mu,$$
and
$$V(X) = \sigma^2.$$

Hint: The probability density function for $X$ is
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}.$$
Considering $\mu = 0$, $\sigma = 1$, and noting that $\int_{-\infty}^{\infty} f_X(x)\, dx = 1$, leads to the standard integral
$$\int_{-\infty}^{\infty} e^{-\frac{x^2}{2}}\, dx = \sqrt{2\pi}.$$

Solution First
$$E(X - \mu) = \int_{-\infty}^{\infty} (x - \mu) f_X(x)\, dx = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} (x - \mu) e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx.$$
With the substitution
$$y = \frac{x - \mu}{\sigma},$$
we obtain
$$E(X - \mu) = \frac{\sigma}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y e^{-\frac{1}{2} y^2}\, dy.$$
The integral is zero since the integrand, the product of an odd function and an even function, is itself odd. Specifically,
$$\begin{aligned}
\int_{-\infty}^{\infty} y e^{-\frac{1}{2} y^2}\, dy &= \int_{-\infty}^{0} y e^{-\frac{1}{2} y^2}\, dy + \int_{0}^{\infty} y e^{-\frac{1}{2} y^2}\, dy \\
&= \int_{-\infty}^{0} y e^{-\frac{1}{2} y^2}\, dy - \int_{-\infty}^{0} (-y)\, e^{-\frac{1}{2} (-y)^2}\, d(-y) \\
&= 0.
\end{aligned}$$
Hence
$$E(X - \mu) = 0,$$
leading to
$$E(X) = \mu.$$

Next
$$V(X) = E\left[ (X - \mu)^2 \right] = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} (x - \mu)^2 e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx.$$
With the substitution
$$y = \frac{x - \mu}{\sigma},$$
we obtain
$$\begin{aligned}
V(X) &= \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y^2 e^{-\frac{1}{2} y^2}\, dy \\
&= -\frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y \left( -y e^{-\frac{1}{2} y^2} \right) dy \\
&= -\frac{\sigma^2}{\sqrt{2\pi}} \left[ y e^{-\frac{1}{2} y^2} \right]_{-\infty}^{\infty} + \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} y^2}\, dy,
\end{aligned}$$
where the final equality follows by integrating by parts. The boundary term vanishes, and using the given standard integral leads to
$$V(X) = \sigma^2.$$

The results can also be demonstrated in Python using SymPy, e.g.

import sympy as sp

x = sp.Symbol("x", real=True)
mu = sp.Symbol("mu", real=True)
sigma = sp.Symbol("sigma", real=True, positive=True)

# Gaussian probability density function
f_X = ((1 / sp.sqrt(2 * sp.pi * (sigma ** 2)))
       * sp.exp(-((x - mu) ** 2) / (2 * (sigma ** 2))))

# Mean and variance by direct integration
E_X = sp.integrate(x * f_X, (x, -sp.oo, sp.oo))
V_X = sp.integrate(((x - E_X) ** 2) * f_X, (x, -sp.oo, sp.oo))

print(f"{E_X=}")
print(f"{V_X=}")

which displays

E_X=mu
V_X=sigma**2

2.2 Given an $a \in \mathbb{R}$, show that
$$E\left( e^{aX} \right) = e^{a\mu + \frac{1}{2} a^2 \sigma^2}.$$

Solution
$$\begin{aligned}
E\left( e^{aX} \right) &= \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} e^{ax} e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{-2 a x \sigma^2 + x^2 - 2\mu x + \mu^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{(x - \mu - a\sigma^2)^2 - 2 a \mu \sigma^2 - a^2 \sigma^4}{2\sigma^2}}\, dx \\
&= e^{a\mu + \frac{1}{2} a^2 \sigma^2} \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{(x - \mu - a\sigma^2)^2}{2\sigma^2}}\, dx \\
&= e^{a\mu + \frac{1}{2} a^2 \sigma^2},
\end{aligned}$$
where the last step uses the fact that the remaining integrand is the probability density function of $\mathcal{N}(\mu + a\sigma^2, \sigma^2)$, and so integrates to one.
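This can likewise be computed symbolically, along the lines of the SymPy code in question 2.1. A sketch; depending on the SymPy version, the result may need sp.simplify to match the stated form:

import sympy as sp

x = sp.Symbol("x", real=True)
a = sp.Symbol("a", real=True)
mu = sp.Symbol("mu", real=True)
sigma = sp.Symbol("sigma", real=True, positive=True)

f_X = ((1 / sp.sqrt(2 * sp.pi * (sigma ** 2)))
       * sp.exp(-((x - mu) ** 2) / (2 * (sigma ** 2))))

# E(e^{a X}); expect exp(a * mu + a ** 2 * sigma ** 2 / 2)
E_exp_aX = sp.integrate(sp.exp(a * x) * f_X, (x, -sp.oo, sp.oo))
print(sp.simplify(E_exp_aX))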

2.3 Let $g : \mathbb{R} \to \mathbb{R}$ be a smooth bounded function. Show that
$$E\left[ X g(X) \right] = \sigma^2 E\left( g'(X) \right) + \mu E\left( g(X) \right).$$

Solution
$$\begin{aligned}
E(X g(X)) &= \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} x g(x) e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx \\
&= -\sigma^2 \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} g(x) \left( -\frac{x - \mu}{\sigma^2} \right) e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx + \mu \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} g(x) e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx \\
&= -\sigma^2 \frac{1}{\sqrt{2\pi\sigma^2}} \left[ g(x) e^{-\frac{(x - \mu)^2}{2\sigma^2}} \right]_{-\infty}^{\infty} + \sigma^2 \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} g'(x) e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx + \mu \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} g(x) e^{-\frac{(x - \mu)^2}{2\sigma^2}}\, dx \\
&= \sigma^2 E\left( g'(X) \right) + \mu E\left( g(X) \right),
\end{aligned}$$
where the second equality writes $x = -\sigma^2 \left( -\frac{x - \mu}{\sigma^2} \right) + \mu$, the third integrates by parts, and the boundary term vanishes since $g$ is bounded.
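For a general $g$ the identity cannot be verified symbolically, but it can be checked numerically for a concrete choice. A sketch with $g(x) = \tanh(x)$ as an example smooth bounded function, and arbitrary example values $\mu = 1/2$, $\sigma = 2$, using SymPy's numerical quadrature:

import sympy as sp

x = sp.Symbol("x", real=True)
mu, sigma = sp.Rational(1, 2), sp.Integer(2)  # arbitrary example parameters
g = sp.tanh(x)                                # a smooth bounded test function

f_X = ((1 / sp.sqrt(2 * sp.pi * (sigma ** 2)))
       * sp.exp(-((x - mu) ** 2) / (2 * (sigma ** 2))))

# Left- and right-hand sides of E(X g(X)) = sigma^2 E(g'(X)) + mu E(g(X)),
# evaluated by numerical quadrature; the two values should agree
lhs = sp.Integral(x * g * f_X, (x, -sp.oo, sp.oo)).evalf()
rhs = (sigma ** 2 * sp.Integral(g.diff(x) * f_X, (x, -sp.oo, sp.oo)).evalf()
       + mu * sp.Integral(g * f_X, (x, -sp.oo, sp.oo)).evalf())
print(lhs, rhs)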

3 Independent Gaussian random variables

Let $X : \Omega \to \mathbb{R}$ and $Y : \Omega \to \mathbb{R}$ be jointly Gaussian random variables. That is, $Q = (X, Y)^T$ is a multi-variate Gaussian random variable. Show that if
$$E\left[ (X - E(X)) (Y - E(Y)) \right] = 0$$
then $X$ and $Y$ are independent.

Hint: We have $Q \sim \mathcal{N}(m, S)$, where $m \in \mathbb{R}^2$ and where $S$ is a real symmetric non-negative definite matrix. You may assume that $S$ is invertible, so that it is symmetric positive definite. The probability density function for $Q$ is then
$$f_Q(q) = \frac{1}{\sqrt{(2\pi)^2 \det S}} e^{-\frac{1}{2} (q - m)^T S^{-1} (q - m)},$$
where $q = (x, y)^T$.


Solution Let
$$m = \begin{pmatrix} m_x \\ m_y \end{pmatrix}, \qquad S = \begin{pmatrix} S_{xx} & S_{xy} \\ S_{xy} & S_{yy} \end{pmatrix}.$$
We have
$$E\left[ (X - E(X)) (Y - E(Y)) \right] = S_{xy}.$$

For completeness this is derived from the probability density function. First²
$$E\left[ (X - E(X)) (Y - E(Y)) \right] = \frac{1}{\sqrt{(2\pi)^2 \det S}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_x) (y - m_y)\, e^{-\frac{1}{2} (q - m)^T S^{-1} (q - m)}\, dx\, dy.$$

Define
$$L = \frac{1}{\sqrt{S_{xx}}} \begin{pmatrix} S_{xx} & 0 \\ S_{xy} & \sqrt{\det S} \end{pmatrix},$$
noting that $S_{xx} \neq 0$ follows from $S$ being symmetric positive definite. After some algebra we have
$$L L^T = S,$$


from which it follows that $\det L = \sqrt{\det S}$.³ With the substitution⁴
$$\begin{pmatrix} \tilde{x} \\ \tilde{y} \end{pmatrix} = L^{-1} \left( \begin{pmatrix} x \\ y \end{pmatrix} - m \right),$$
we obtain (being careful to include the Jacobian determinant factor)
$$\begin{aligned}
E\left[ (X - E(X)) (Y - E(Y)) \right] &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \tilde{x} \left( \tilde{x} S_{xy} + \tilde{y} \sqrt{\det S} \right) e^{-\frac{1}{2} (\tilde{x}^2 + \tilde{y}^2)}\, d\tilde{x}\, d\tilde{y} \\
&= \frac{S_{xy}}{2\pi} \int_{-\infty}^{\infty} \tilde{x}^2 e^{-\frac{1}{2} \tilde{x}^2}\, d\tilde{x} \int_{-\infty}^{\infty} e^{-\frac{1}{2} \tilde{y}^2}\, d\tilde{y} + \frac{\sqrt{\det S}}{2\pi} \int_{-\infty}^{\infty} \tilde{x} e^{-\frac{1}{2} \tilde{x}^2}\, d\tilde{x} \int_{-\infty}^{\infty} \tilde{y} e^{-\frac{1}{2} \tilde{y}^2}\, d\tilde{y} \\
&= S_{xy},
\end{aligned}$$
where identities as in the solution to question 2.1 have been used.
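The "some algebra" for $L L^T = S$ can be checked with SymPy, e.g. (a sketch; the symbolic check ignores the positive-definiteness of $S$, which is not needed for the algebra itself):

import sympy as sp

S_xx, S_yy = sp.symbols("S_xx S_yy", positive=True)
S_xy = sp.Symbol("S_xy", real=True)

S = sp.Matrix([[S_xx, S_xy], [S_xy, S_yy]])
L = sp.Matrix([[S_xx, 0], [S_xy, sp.sqrt(S.det())]]) / sp.sqrt(S_xx)

# Should print the zero matrix, confirming L L^T = S
print(sp.simplify(L * L.T - S))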


Hence, since
$$E\left[ (X - E(X)) (Y - E(Y)) \right] = 0,$$
we have $S_{xy} = 0$. Then
$$\begin{aligned}
f_Q(q) &= \frac{1}{\sqrt{(2\pi)^2 \det S}} e^{-\frac{1}{2} (q - m)^T S^{-1} (q - m)} \\
&= \frac{1}{\sqrt{(2\pi)^2 S_{xx} S_{yy}}} e^{-\frac{(x - m_x)^2}{2 S_{xx}} - \frac{(y - m_y)^2}{2 S_{yy}}} \\
&= \left( \frac{1}{\sqrt{2\pi S_{xx}}} e^{-\frac{(x - m_x)^2}{2 S_{xx}}} \right) \left( \frac{1}{\sqrt{2\pi S_{yy}}} e^{-\frac{(y - m_y)^2}{2 S_{yy}}} \right) \\
&= f_X(x) f_Y(y),
\end{aligned}$$
where $f_X$ and $f_Y$ are the probability density functions for $X$ and $Y$ respectively.⁵ Hence $X$ and $Y$ are independent.
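The factorization of the probability density function when $S_{xy} = 0$ can also be verified with SymPy, e.g.:

import sympy as sp

x, y, m_x, m_y = sp.symbols("x y m_x m_y", real=True)
S_xx, S_yy = sp.symbols("S_xx S_yy", positive=True)

q_m = sp.Matrix([x - m_x, y - m_y])
S = sp.Matrix([[S_xx, 0], [0, S_yy]])  # covariance matrix with S_xy = 0

f_Q = (sp.exp(-(q_m.T * S.inv() * q_m)[0, 0] / 2)
       / sp.sqrt((2 * sp.pi) ** 2 * S.det()))
f_X = sp.exp(-((x - m_x) ** 2) / (2 * S_xx)) / sp.sqrt(2 * sp.pi * S_xx)
f_Y = sp.exp(-((y - m_y) ** 2) / (2 * S_yy)) / sp.sqrt(2 * sp.pi * S_yy)

# Should print 0, confirming f_Q = f_X * f_Y
print(sp.simplify(f_Q - f_X * f_Y))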

¹ To see these identities, consider e.g. any expectation $E(g(X))$, taken over both $X$ and $Y$. This allows us to identify the PDF for $X$, as used here.
² $E(X) = m_x$ and $E(Y) = m_y$ can also be derived from the probability density function, e.g. via an approach similar to that used in question 2.1.
³ Technical note: $L$ is the Cholesky factor of the covariance matrix $S$.
⁴ This is a higher dimensional generalization of the substitution used in the solution to question 2.1.
⁵ To see that these are the PDFs for $X$ and $Y$, consider the expectations of functions depending only on $X$ or $Y$, $E(g(X))$ and $E(g(Y))$.
