
ST112: Probability B Quiz 2 solutions

1. Let X and Y be jointly continuous random variables with joint density



$$
f_{X,Y}(x,y) = \begin{cases} \frac{1}{4}\,(x+3y)\,e^{-(x+y)} & \text{if } x > 0 \text{ and } y > 0;\\ 0 & \text{otherwise.} \end{cases}
$$

Determine P(X > Y ).


Solution. Note that $\{X > Y\} = \{(X,Y) \in A\}$, where $A := \{(x,y) \in \mathbb{R}^2 : x > y\}$. Hence,
$$
\begin{aligned}
P(X > Y) &= \iint_A f_{X,Y}(x,y)\,dy\,dx = \int_0^\infty\!\int_0^x \frac{1}{4}(x+3y)\,e^{-(x+y)}\,dy\,dx\\
&= \frac14\int_0^\infty e^{-x}\left(x\int_0^x e^{-y}\,dy + 3\int_0^x y\,e^{-y}\,dy\right)dx\\
&= \frac14\int_0^\infty e^{-x}\left(x\left[\frac{e^{-y}}{-1}\right]_0^x + 3\left[y\cdot\frac{e^{-y}}{-1}\right]_0^x + 3\int_0^x e^{-y}\,dy\right)dx\\
&= \frac14\int_0^\infty e^{-x}\left((x+3) - (4x+3)\,e^{-x}\right)dx\\
&= \frac14\left(\int_0^\infty (x+3)\,e^{-x}\,dx - \int_0^\infty (4x+3)\,e^{-2x}\,dx\right)\\
&= \frac14\left(4 - \left(1 + \frac32\right)\right) = \frac38 = 0.375.
\end{aligned}
$$
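As a sanity check, the double integral can be evaluated numerically; the following is a minimal sketch assuming NumPy and SciPy are available.

```python
# Numerical check of P(X > Y) = 3/8 for the density above.
import numpy as np
from scipy import integrate

# Joint density on x > 0, y > 0; dblquad passes the inner variable (y) first.
f = lambda y, x: 0.25 * (x + 3 * y) * np.exp(-(x + y))

# Integrate over the region {0 < y < x}.
p, err = integrate.dblquad(f, 0, np.inf, lambda x: 0, lambda x: x)
print(p)  # approximately 0.375
```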

2. Let X and Y be independent random variables. Assume that X has the doubly exponential distribution with parameter λ > 0, that is,
$$
f_X(x) = \frac{\lambda}{2}\,e^{-\lambda|x|}, \qquad x \in \mathbb{R},
$$
and Y has the Gamma distribution with parameters 2 and λ > 0, that is,
$$
f_Y(y) = \begin{cases} \lambda^2\,y\,e^{-\lambda y} & \text{if } y > 0;\\ 0 & \text{otherwise.} \end{cases}
$$

Mark all the correct answers.

(a) X + Y has the Gamma distribution with parameters 3 and λ.

(b) For z > 0, $f_{X+Y}(z) = \left(\frac{\lambda}{8} + \frac{\lambda^3}{4}\,z^2\right) e^{-\lambda z}$.

(c) For z > 0, $f_{X+Y}(z) = \frac{\lambda^3}{2}\,z^2\,e^{-\lambda z}$.

(d) For z < 0, $f_{X+Y}(z) = \frac{\lambda}{8}\,e^{\lambda z}$.

(e) For z < 0, $f_{X+Y}(z) = 0$.

Solution. Since X and Y are independent, to compute the density of X + Y we can use the convolution formula. Note that $f_Y(z-x) = 0$ unless $x < z$, so
$$
f_{X+Y}(z) = \int_{-\infty}^{\infty} f_X(x)\,f_Y(z-x)\,dx = \int_{-\infty}^{z} \frac{\lambda}{2}\,e^{-\lambda|x|}\cdot\lambda^2\,(z-x)\,e^{-\lambda(z-x)}\,dx.
$$
For $z \le 0$ we have $x \le z \le 0$, hence $|x| = -x$ and
$$
\begin{aligned}
f_{X+Y}(z) &= \frac{\lambda^3}{2}\int_{-\infty}^{z} e^{\lambda x}\,(z-x)\,e^{-\lambda z+\lambda x}\,dx = \frac{\lambda^3}{2}\,e^{-\lambda z}\int_{-\infty}^{z} (z-x)\,e^{2\lambda x}\,dx\\
&= \frac{\lambda^3}{2}\,e^{-\lambda z}\left(\left[(z-x)\,\frac{e^{2\lambda x}}{2\lambda}\right]_{-\infty}^{z} + \int_{-\infty}^{z}\frac{e^{2\lambda x}}{2\lambda}\,dx\right)\\
&= \frac{\lambda^3}{2}\,e^{-\lambda z}\left[\frac{e^{2\lambda x}}{(2\lambda)^2}\right]_{-\infty}^{z} = \frac{\lambda^3}{2}\cdot\frac{e^{\lambda z}}{4\lambda^2} = \frac{\lambda}{8}\,e^{\lambda z}.
\end{aligned}
$$
For z > 0, we split the integral as
$$
f_{X+Y}(z) = \frac{\lambda^3}{2}\int_{-\infty}^{0} e^{\lambda x}\,(z-x)\,e^{-\lambda(z-x)}\,dx + \frac{\lambda^3}{2}\int_{0}^{z} (z-x)\,e^{-\lambda z}\,dx.
$$
Now, the first integral gives
$$
\begin{aligned}
\frac{\lambda^3}{2}\int_{-\infty}^{0} e^{\lambda x}\,(z-x)\,e^{-\lambda(z-x)}\,dx
&= \frac{\lambda^3}{2}\,e^{-\lambda z}\left(z\int_{-\infty}^{0} e^{2\lambda x}\,dx - \int_{-\infty}^{0} x\,e^{2\lambda x}\,dx\right)\\
&= \frac{\lambda^3}{2}\,e^{-\lambda z}\left(z\left[\frac{e^{2\lambda x}}{2\lambda}\right]_{-\infty}^{0} - \left[x\,\frac{e^{2\lambda x}}{2\lambda}\right]_{-\infty}^{0} + \int_{-\infty}^{0}\frac{e^{2\lambda x}}{2\lambda}\,dx\right)\\
&= \frac{\lambda^3}{2}\,e^{-\lambda z}\left(\frac{z}{2\lambda} + \frac{1}{4\lambda^2}\right) = \frac{\lambda^3}{4}\,e^{-\lambda z}\left(\frac{z}{\lambda} + \frac{1}{2\lambda^2}\right).
\end{aligned}
$$
The second integral is
$$
\frac{\lambda^3}{2}\,e^{-\lambda z}\int_0^z (z-x)\,dx = \frac{\lambda^3}{2}\,e^{-\lambda z}\left[-\frac{(z-x)^2}{2}\right]_0^z = \frac{\lambda^3}{4}\,z^2\,e^{-\lambda z}.
$$
Hence, for z > 0,
$$
f_{X+Y}(z) = \frac{\lambda^3}{4}\,e^{-\lambda z}\left(\frac{z}{\lambda} + \frac{1}{2\lambda^2} + z^2\right) = \left(\frac{\lambda^2 z}{4} + \frac{\lambda}{8} + \frac{\lambda^3 z^2}{4}\right)e^{-\lambda z}.
$$
This matches neither (b) nor (c), and X + Y is not Gamma; for z < 0 the density equals $\frac{\lambda}{8}\,e^{\lambda z} \ne 0$, so (d) holds and (e) fails.
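The closed form can be compared against a direct numerical convolution; the sketch below assumes SciPy, with lam = 1.5 an arbitrary test value.

```python
# Compare the derived density of X + Y with a numerical convolution.
import numpy as np
from scipy import integrate

lam = 1.5  # arbitrary test value

def f_X(x):  # doubly exponential (Laplace) density
    return 0.5 * lam * np.exp(-lam * abs(x))

def f_Y(y):  # Gamma(2, lam) density
    return lam**2 * y * np.exp(-lam * y) if y > 0 else 0.0

def f_sum(z):  # closed form derived above, in expanded form
    if z <= 0:
        return lam / 8 * np.exp(lam * z)
    return (lam / 8 + lam**2 * z / 4 + lam**3 * z**2 / 4) * np.exp(-lam * z)

for z in (-1.0, 0.5, 2.0):
    num, _ = integrate.quad(lambda x: f_X(x) * f_Y(z - x), -np.inf, z)
    print(z, num, f_sum(z))  # the last two columns should agree
```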
Correct alternatives: d
Grading: +10 points for alternative d, -2.5 points for each incorrect alternative marked
(grade cannot be lower than zero)

3. Let X and Y be independent discrete random variables, each with Geometric distribution
of parameter p ∈ (0, 1), that is, the probability mass functions pX and pY of X and Y
are
$$
p_X(k) = p_Y(k) = \begin{cases} (1-p)^{k-1}\,p & \text{if } k \in \mathbb{N};\\ 0 & \text{otherwise.} \end{cases}
$$
Mark all correct answers.

(a) X + Y has Geometric distribution with parameter $1 - (1-p)^2$.

(b) min{X, Y} has Geometric distribution with parameter $1 - (1-p)^2$.

(c) $P(X + Y = k) = (k-1)\,p^2\,(1-p)^{k-2}$ for k = 2, 3, ...

(d) $P(\max\{X, Y\} \le 2) = p^2\,(2-p)^2$.
(e) None of the above.

Solution. Since X and Y are independent, we can apply the discrete convolution formula, which gives
$$
P(X+Y=k) = \sum_j p_X(j)\,p_Y(k-j) = \sum_{j=1}^{k-1} (1-p)^{j-1}\,p\cdot(1-p)^{k-j-1}\,p = p^2\,(1-p)^{k-2}\sum_{j=1}^{k-1} 1 = (k-1)\,p^2\,(1-p)^{k-2}.
$$
Here we can already see that X + Y is not Geometric, so (a) fails while (c) holds. We now compute
$$
\begin{aligned}
P(\min\{X,Y\} \ge k) &= P(X \ge k,\ Y \ge k) = P(X \ge k)\,P(Y \ge k) = P(X \ge k)^2\\
&= \Bigg(\sum_{j=k}^{\infty}(1-p)^{j-1}\,p\Bigg)^2 = \Bigg(p\,(1-p)^{k-1}\sum_{\ell=0}^{\infty}(1-p)^{\ell}\Bigg)^2\\
&= p^2\,(1-p)^{2(k-1)}\left(\frac{1}{1-(1-p)}\right)^2 = (1-p)^{2(k-1)}.
\end{aligned}
$$
Hence,
$$
\begin{aligned}
P(\min\{X,Y\} = k) &= P(\min\{X,Y\} \ge k) - P(\min\{X,Y\} \ge k+1)\\
&= (1-p)^{2(k-1)} - (1-p)^{2k} = (1-p)^{2(k-1)}\left(1 - (1-p)^2\right)\\
&= \left((1-p)^2\right)^{k-1}\left(1 - (1-p)^2\right).
\end{aligned}
$$
Hence, min{X, Y} is Geometric with parameter $1-(1-p)^2$, so (b) holds. Finally, by independence,
$$
P(\max\{X,Y\} \le 2) = P(X \le 2)\,P(Y \le 2) = \left(p + (1-p)\,p\right)^2 = p^2\,(2-p)^2,
$$
so (d) holds as well.
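These facts are easy to confirm by simulation; a minimal sketch (assuming NumPy, with p = 0.3 chosen arbitrarily):

```python
# Simulation check of (b), (c), (d) for two independent Geometric(p) variables.
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 10**6
X = rng.geometric(p, n)  # support {1, 2, ...}
Y = rng.geometric(p, n)

q = 1 - (1 - p)**2
m = np.minimum(X, Y)
print((m == 1).mean(), q)  # (b): P(min = 1) should be close to q
k = 4
print((X + Y == k).mean(), (k - 1) * p**2 * (1 - p)**(k - 2))  # (c)
print((np.maximum(X, Y) <= 2).mean(), p**2 * (2 - p)**2)       # (d)
```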


Correct alternatives: b, c, d
Grading: 0 points if “None of the above” was marked. Marking (b), (c), (d) is worth
0.33 each, marking (a) gives a penalty of 0.33 (but grade cannot be negative)

4. Let X be a continuous random variable with log-normal distribution, that is,

 √1 · e− 12 (log x)2 if x > 0;
x 2π
fX (x) =
0 otherwise.

Determine the value of $A = \sqrt{2\pi}\cdot E[|\log X|]$.
Solution. Substituting $y = \log x$ (so that $dy = dx/x$), we have
$$
\begin{aligned}
A = \sqrt{2\pi}\cdot E[|\log X|] &= \sqrt{2\pi}\int_0^\infty \frac{|\log x|}{x\sqrt{2\pi}}\,e^{-\frac12(\log x)^2}\,dx = \int_{-\infty}^{\infty} |y|\,e^{-\frac12 y^2}\,dy\\
&= \int_0^\infty y\,e^{-\frac12 y^2}\,dy - \int_{-\infty}^0 y\,e^{-\frac12 y^2}\,dy = 2\int_0^\infty z\,e^{-\frac12 z^2}\,dz\\
&= \int_0^\infty e^{-w/2}\,dw = \left[\frac{e^{-w/2}}{-1/2}\right]_0^\infty = 2,
\end{aligned}
$$
where in the last line we substituted $w = z^2$.
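A short Monte Carlo check of A = 2 (a sketch, assuming NumPy; X is sampled as a standard log-normal):

```python
# sqrt(2*pi) * E[|log X|] should be close to 2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=10**7)
print(np.sqrt(2 * np.pi) * np.abs(np.log(X)).mean())  # approximately 2
```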

5. Let X be a discrete random variable with Binomial distribution with parameters 4 and p ∈
(0, 1). Consider the random variable Y defined as

$$Y = \cos\!\left(\tfrac12\pi X\right).$$

Mark all the correct answers.


(a) Y has Bernoulli distribution with parameter p.

(b) Y is discrete uniform over the interval [−1, 1].

(c) $E[Y] = p^2(1-p)^2\left(p^2 + (1-p)^2 - 6\right)$.

(d) $E[Y] = 0$.

(e) $E[Y^2] = p^2(1-p)^2\left(p^2 + (1-p)^2 + 6\right)$.

(f) $\mathrm{Var}(Y) = 12\,p^2(1-p)^2$.
Solution. Note that Im(X) = {0, 1, 2, 3, 4}, as X is Binomial with parameters 4 and p.
Hence,
• Y = 1 if and only if X = 0, 4;
• Y = 0 if and only if X = 1, 3;
• Y = −1 if and only if X = 2.
Therefore, Im(Y ) = {−1, 0, 1}, so Y is not Bernoulli. We compute

$$
\begin{aligned}
p_Y(1) &= P(X=0) + P(X=4) = (1-p)^4 + p^4;\\
p_Y(0) &= P(X=1) + P(X=3) = 4(1-p)^3\,p + 4p^3(1-p);\\
p_Y(-1) &= P(X=2) = 6p^2(1-p)^2.
\end{aligned}
$$

Note that $p_Y(0) = p_Y(-1)$ if and only if
$$
\begin{aligned}
&4(1-p)^3\,p + 4p^3(1-p) - 6p^2(1-p)^2 = 0\\
\Leftrightarrow\;& 2\,p(1-p)\left(2(1-p)^2 + 2p^2 - 3p(1-p)\right) = 0\\
\Leftrightarrow\;& 2\,p(1-p)\left(2 - 4p + 2p^2 + 2p^2 - 3p + 3p^2\right) = 0\\
\Leftrightarrow\;& 2\,p(1-p)\left(7p^2 - 7p + 2\right) = 0\\
\Leftrightarrow\;& p = 0 \text{ or } p = 1,
\end{aligned}
$$
since the quadratic $7p^2 - 7p + 2$ has negative discriminant. But both p = 0 and p = 1 are excluded. Hence, there is no value of p ∈ (0, 1) such that Y is discrete uniform on {−1, 0, 1}.
Next, we compute
$$
\begin{aligned}
E[Y] &= -1\cdot p_Y(-1) + 0\cdot p_Y(0) + 1\cdot p_Y(1) = p^4 + (1-p)^4 - 6p^2(1-p)^2;\\
E[Y^2] &= (-1)^2\cdot p_Y(-1) + 0^2\cdot p_Y(0) + 1^2\cdot p_Y(1) = p^4 + (1-p)^4 + 6p^2(1-p)^2;\\
\mathrm{Var}(Y) &= E[Y^2] - E[Y]^2 = p^4 + (1-p)^4 + 6p^2(1-p)^2 - \left(p^4 + (1-p)^4 - 6p^2(1-p)^2\right)^2\\
&= 4p(1-p)\left[4p^6 - 12p^5 + 4p^4 + 12p^3 - 8p^2 + 1\right].
\end{aligned}
$$
Comparing at $p = \frac12$, for instance, shows that neither (c) nor (e) matches these expressions, and there $E[Y] = -\frac14 \ne 0$, so (d) fails as well. Alternative (f) is not true for all values of p. To see this, note that
$$
\begin{aligned}
\mathrm{Var}(Y) - 12p^2(1-p)^2 &= 4p(1-p)\left[4p^6 - 12p^5 + 4p^4 + 12p^3 - 8p^2 + 1\right] - 4p(1-p)\cdot 3p(1-p)\\
&= 4p(1-p)\left[4p^6 - 12p^5 + 4p^4 + 12p^3 - 5p^2 - 3p + 1\right],
\end{aligned}
$$
and this is not zero for all p ∈ (0, 1).
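Since Im(Y) is finite, all of this can be verified by exact enumeration; a sketch (p = 0.3 is an arbitrary test value):

```python
# Exact pmf of Y = cos(pi*X/2) for X ~ Binomial(4, p), and the moment checks above.
import math

p = 0.3
pmf_X = [math.comb(4, k) * p**k * (1 - p)**(4 - k) for k in range(5)]
pY = {1: pmf_X[0] + pmf_X[4], 0: pmf_X[1] + pmf_X[3], -1: pmf_X[2]}

EY = sum(y * q for y, q in pY.items())
EY2 = sum(y * y * q for y, q in pY.items())
print(EY, p**4 + (1 - p)**4 - 6 * p**2 * (1 - p)**2)   # equal
print(EY2, p**4 + (1 - p)**4 + 6 * p**2 * (1 - p)**2)  # equal
print(EY2 - EY**2, 12 * p**2 * (1 - p)**2)             # differ, so (f) fails
```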


Correct alternatives: None.
Grading: everyone who submitted the quiz was given full marks for this question, as
there was an issue with submitting the correct answer.

6. Let X be a continuous random variable with probability density function given by

$$
f_X(x) = \begin{cases} \frac34\,(1-x^2) & \text{if } x \in (-1,1);\\ 0 & \text{otherwise.} \end{cases}
$$

Compute the variance of X.


Solution. Note that for any $k \in \mathbb{N}$ we have
$$
E[X^k] = \frac34\int_{-1}^1 x^k\,(1-x^2)\,dx = \frac34\left(\left[\frac{x^{k+1}}{k+1}\right]_{-1}^1 - \left[\frac{x^{k+3}}{k+3}\right]_{-1}^1\right) = \begin{cases}\frac34\left(\frac{2}{k+1}-\frac{2}{k+3}\right) & \text{if } k \text{ is even;}\\ 0 & \text{otherwise.}\end{cases}
$$
Hence, $E[X] = 0$ and
$$
\mathrm{Var}(X) = E[X^2] - E[X]^2 = \frac34\left(\frac23 - \frac25\right) = \frac34\cdot\frac{10-6}{15} = \frac15 = 0.2.
$$
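A one-line numerical confirmation (a sketch, assuming SciPy):

```python
# Var(X) for the density (3/4)(1 - x^2) on (-1, 1) should be 0.2.
from scipy import integrate

f = lambda x: 0.75 * (1 - x**2)
m1, _ = integrate.quad(lambda x: x * f(x), -1, 1)
m2, _ = integrate.quad(lambda x: x**2 * f(x), -1, 1)
print(m2 - m1**2)  # approximately 0.2
```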

7. Let X and Y be discrete random variables with joint mass function

$$
p_{X,Y}(x,y) = \begin{cases} \frac{4}{25}\,x^2\,(y+2) & \text{if } x = \frac12, 1 \text{ and } y = -1, 2;\\ 0 & \text{otherwise.} \end{cases}
$$

Mark all the correct answers.

(a) X and Y are independent.

(b) X and Y are uncorrelated.

(c) $p_X(x) = \frac{4}{25}\,x^2$ for $x = \frac12, 1$.

(d) $E[X] = \frac9{10}$.

(e) $E[Y] = \frac25$.

(f) $E[XY] = \frac{63}{50}$.

Solution.

(a) Yes, they are independent, since $p_{X,Y}(x,y) = g(x)\,h(y)$ with, for example, $g(x) = \frac{4}{25}x^2$ and $h(y) = y+2$.

(b) Since they are independent, they are uncorrelated.

(c) We have
$$
p_X(x) = \frac{4}{25}\,x^2\cdot 1 + \frac{4}{25}\,x^2\cdot 4 = \frac{4}{25}\,x^2\cdot 5 = \frac45\,x^2,
$$
so (c) is false.

(d) The expectation is
$$
E[X] = \frac12\cdot\frac45\cdot\frac14 + 1\cdot\frac45\cdot 1 = \frac1{10} + \frac45 = \frac9{10}.
$$

(e) Since X and Y are independent, dividing the joint pmf by the marginal of X gives the marginal of Y:
$$
p_Y(y) = \frac{p_{X,Y}(x,y)}{p_X(x)} = \frac15\,(y+2).
$$
Then,
$$
E[Y] = -1\cdot\frac15\,(-1+2) + 2\cdot\frac15\,(2+2) = -\frac15 + \frac85 = \frac75,
$$
so (e) is false.

(f) Since X and Y are independent,
$$
E[XY] = E[X]\cdot E[Y] = \frac9{10}\cdot\frac75 = \frac{63}{50}.
$$
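With only four support points, everything above can be checked exactly; a sketch using exact rational arithmetic:

```python
# Exact enumeration over the support of (X, Y).
from fractions import Fraction as F

xs, ys = (F(1, 2), F(1)), (-1, 2)
p = {(x, y): F(4, 25) * x**2 * (y + 2) for x in xs for y in ys}

pX = {x: sum(p[(x, y)] for y in ys) for x in xs}
pY = {y: sum(p[(x, y)] for x in xs) for y in ys}

# (a): the joint pmf factorizes into the marginals at every point.
print(all(p[(x, y)] == pX[x] * pY[y] for x in xs for y in ys))  # True
EX = sum(x * pX[x] for x in xs)
EY = sum(y * pY[y] for y in ys)
EXY = sum(x * y * p[(x, y)] for x in xs for y in ys)
print(EX, EY, EXY)  # 9/10, 7/5, 63/50
```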
Correct alternatives: a, b, d, f
Grading: +2.5 for marking each correct alternative, -2.5 for marking each incorrect alternative (but grade cannot be negative).

8. Let X and Y be jointly continuous random variables with joint density

$$
f_{X,Y}(x,y) = \begin{cases} \frac13\,(x+y) & \text{if } x \in (0,1) \text{ and } y \in (0,2);\\ 0 & \text{otherwise.} \end{cases}
$$

Compute E[sin(πXY )].


Solution. We have
$$
\begin{aligned}
E[\sin(\pi XY)] &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \sin(\pi xy)\,f_{X,Y}(x,y)\,dx\,dy = \frac13\int_0^2\!\int_0^1 \sin(\pi xy)\,(x+y)\,dx\,dy\\
&= \frac13\left(\int_0^1 x\int_0^2 \sin(\pi xy)\,dy\,dx + \int_0^2 y\int_0^1 \sin(\pi xy)\,dx\,dy\right)\\
&= \frac13\left(\int_0^1 x\left[-\frac{\cos(\pi xy)}{\pi x}\right]_{y=0}^{y=2} dx + \int_0^2 y\left[-\frac{\cos(\pi xy)}{\pi y}\right]_{x=0}^{x=1} dy\right)\\
&= \frac13\left(\int_0^1 \frac{1-\cos(2\pi x)}{\pi}\,dx + \int_0^2 \frac{1-\cos(\pi y)}{\pi}\,dy\right)\\
&= \frac13\left(\frac1\pi\left[x - \frac{\sin(2\pi x)}{2\pi}\right]_0^1 + \frac1\pi\left[y - \frac{\sin(\pi y)}{\pi}\right]_0^2\right) = \frac13\left(\frac1\pi + \frac2\pi\right) = \frac1\pi = 0.31\dots
\end{aligned}
$$
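The value 1/π can be confirmed numerically; a sketch assuming SciPy:

```python
# E[sin(pi X Y)] for the density (x + y)/3 on (0,1) x (0,2).
import numpy as np
from scipy import integrate

g = lambda y, x: np.sin(np.pi * x * y) * (x + y) / 3  # inner variable y first
val, _ = integrate.dblquad(g, 0, 1, lambda x: 0, lambda x: 2)
print(val, 1 / np.pi)  # both approximately 0.3183
```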

9. Let X1 , . . . , X10 be independent discrete random variables such that for all n,

$$
p_{X_n}(k) = \begin{cases} \frac1{16} & \text{if } k = 0;\\ \frac14 & \text{if } k = 1;\\ \frac12 & \text{if } k = 2;\\ \frac3{16} & \text{if } k = 3. \end{cases}
$$
Consider the random variable
$$
Y := \sum_{n=1}^{10} X_n.
$$

Compute E[Y ].
Solution. We have, for each n,
$$
E[X_n] = 0\cdot\frac1{16} + 1\cdot\frac14 + 2\cdot\frac12 + 3\cdot\frac3{16} = \frac14 + 1 + \frac9{16} = \frac{29}{16}.
$$
Then,
$$
E[Y] = E\Bigg[\sum_{n=1}^{10} X_n\Bigg] = \sum_{n=1}^{10} E[X_n] = 10\cdot\frac{29}{16} = \frac{290}{16} = 18.125.
$$
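By linearity no simulation is needed, but a quick Monte Carlo check (a sketch, assuming NumPy) agrees:

```python
# E[Y] for Y = X_1 + ... + X_10 should be close to 290/16 = 18.125.
import numpy as np

rng = np.random.default_rng(0)
support, probs = [0, 1, 2, 3], [1/16, 1/4, 1/2, 3/16]
samples = rng.choice(support, size=(10**6, 10), p=probs)
print(samples.sum(axis=1).mean())  # approximately 18.125
```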

10. Let X be a Gaussian random variable with mean 0 and variance 4, that is,
$$
f_X(x) = \frac{1}{\sqrt{8\pi}}\,e^{-x^2/8}, \qquad x \in \mathbb{R}.
$$
Let $Y = X^2$.
Recall that if Z is a Gaussian random variable with mean µ and variance σ² then
$$
1 = \int_{-\infty}^{\infty} f_Z(z)\,dz = \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty} e^{-\frac{(z-\mu)^2}{2\sigma^2}}\,dz.
$$
Mark all the correct answers.

(a) The moment-generating function $M_Y(t)$ of Y is well defined and finite for all $t \in \mathbb{R}$.

(b) The moment-generating function $M_Y(t)$ of Y is well defined and finite for $t < \frac18$.

(c) $E[Y] = 4$.

(d) $\mathrm{Var}(X) = 16$.

(e) None of the above.

Solution. Notice that
$$
M_Y(t) = E[e^{tY}] = E[e^{tX^2}] = \frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty} e^{tx^2}\,e^{-\frac{x^2}{8}}\,dx = \frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty} e^{-\frac{x^2}{2}\left(\frac14 - 2t\right)}\,dx.
$$
The previous integral converges if and only if
$$
\frac14 - 2t > 0 \quad\Leftrightarrow\quad t < \frac18.
$$
Hence, for $t < \frac18$ we have
$$
\begin{aligned}
M_Y(t) &= \frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty} e^{-\frac{x^2}{2}\cdot\frac{1-8t}{4}}\,dx = \frac{1}{\sqrt4}\cdot\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \exp\left\{-\frac{x^2}{2\cdot\frac{4}{1-8t}}\right\}dx\\
&= \frac12\sqrt{\frac{4}{1-8t}}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\cdot\frac{4}{1-8t}}}\,\exp\left\{-\frac{x^2}{2\cdot\frac{4}{1-8t}}\right\}dx.
\end{aligned}
$$
Now note that the expression inside the integral is the density function of a Gaussian with mean 0 and variance $\sigma^2 = \frac{4}{1-8t}$, so the integral equals 1. Therefore,
$$
M_Y(t) = \frac{1}{\sqrt{1-8t}}.
$$
To determine the moments, we compute the derivatives of $M_Y$:
$$
M_Y'(t) = \frac12\cdot\frac{8}{(1-8t)^{3/2}} = \frac{4}{(1-8t)^{3/2}}, \qquad
M_Y''(t) = 4\cdot\frac32\cdot\frac{8}{(1-8t)^{5/2}} = \frac{48}{(1-8t)^{5/2}}.
$$
Then,
$$
E[Y] = M_Y'(0) = 4, \qquad E[Y^2] = M_Y''(0) = 48,
$$
so
$$
\mathrm{Var}(Y) = E[Y^2] - E[Y]^2 = 48 - 16 = 32.
$$
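Both the MGF and the moments can be checked by simulation; a sketch (assuming NumPy, with t = 0.05 < 1/8 an arbitrary test value):

```python
# Monte Carlo check of M_Y(t) = (1 - 8t)^(-1/2), E[Y] = 4, Var(Y) = 32.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(0, 2, 10**7) ** 2  # X ~ N(0, 4), Y = X^2

t = 0.05  # any t < 1/8 works
print(np.exp(t * Y).mean(), 1 / np.sqrt(1 - 8 * t))  # should agree
print(Y.mean(), Y.var())  # approximately 4 and 32
```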
Correct alternatives: b, c
Grading: 0 points if “None of the above” was marked. Marking (b) and (c) is worth 5
points each, marking (a) and (d) is worth -5 points each (but grade cannot be negative).
