MA2506 / MA2510 – Probability and Statistics
Homework assignment #1 – solutions
1. (i) Consider two events A and B, with P(A) = 0.3, P(B) = 1. Compute P(A ∩ B), P(A^c ∩ B),
and P(A ∩ B^c) (where we denote by A^c = Ω \ A the complement of an event A).
First, it follows from A ∩ B^c ⊆ B^c that
0 ≤ P(A ∩ B^c) ≤ P(B^c) = 1 − P(B) = 0.
We thus have P(A ∩ B^c) = 0.
Second, we can write A as the disjoint union A = (A ∩ B) ∪ (A ∩ B^c), which implies
P(A) = P(A ∩ B) + P(A ∩ B^c).
We deduce
P(A ∩ B) = P(A) − P(A ∩ B^c) = P(A) = 0.3.
Finally, for similar reasons (writing A^c as the disjoint union (A^c ∩ B) ∪ (A^c ∩ B^c), where
the second piece has probability 0 since A^c ∩ B^c ⊆ B^c), we have
P(A^c ∩ B) = P(A^c) = 1 − P(A) = 0.7.
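For readers who want a concrete sanity check (not part of the original solution), here is a small
Python sketch that builds a toy finite probability space with P(A) = 0.3 and P(B) = 1 and
recomputes the three probabilities; the uniform 10-point sample space is an illustrative assumption.

    from fractions import Fraction

    # Toy sample space: 10 equally likely outcomes (illustrative assumption).
    omega = set(range(10))
    p = {w: Fraction(1, 10) for w in omega}

    A = {0, 1, 2}       # P(A) = 3/10 = 0.3
    B = set(omega)      # P(B) = 1

    def prob(event):
        return sum(p[w] for w in event)

    Ac, Bc = omega - A, omega - B
    print(prob(A & B))   # 3/10, matching P(A ∩ B) = 0.3
    print(prob(Ac & B))  # 7/10, matching P(A^c ∩ B) = 0.7
    print(prob(A & Bc))  # 0,    matching P(A ∩ B^c) = 0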
(ii) We now consider three independent events A, B, and C. Using the definition of independence,
show that the two events A^c and B^c ∪ C^c are independent.
In order to prove the independence of these two events, we need to check the relation
P(A^c ∩ (B^c ∪ C^c)) = P(A^c) · P(B^c ∪ C^c),
and we have the following four relations at our disposal, coming from the independence
of the three events A, B, and C:
1. P(A ∩ B) = P(A)P(B),
2. P(B ∩ C) = P(B)P(C),
3. P(A ∩ C) = P(A)P(C),
4. P(A ∩ B ∩ C) = P(A)P(B)P(C).
We can for example proceed as follows. First,
P(A^c) = P(A^c ∩ (B^c ∪ C^c)) + P(A^c ∩ (B^c ∪ C^c)^c)
(since we have a disjoint union), so
P(A^c ∩ (B^c ∪ C^c)) = P(A^c) − P(A^c ∩ (B^c ∪ C^c)^c)
= P(A^c) − P(A^c ∩ B ∩ C)
(by De Morgan's law, (B^c ∪ C^c)^c = B ∩ C).
Then (for similar reasons),
P(A^c ∩ B ∩ C) + P(A ∩ B ∩ C) = P(B ∩ C)
so
P(A^c ∩ B ∩ C) = P(B ∩ C) − P(A ∩ B ∩ C)
= P(B)P(C) − P(A)P(B)P(C)
= (1 − P(A))P(B)P(C)
= P(A^c)P(B)P(C)
(where we used relations 2. and 4. to go from the first line to the second one). Finally,
we write
P(A^c ∩ (B^c ∪ C^c)) = P(A^c) − P(A^c ∩ B ∩ C)
= P(A^c) − P(A^c)P(B)P(C)
= P(A^c)(1 − P(B)P(C))
= P(A^c)P(B^c ∪ C^c)
(since P(B)P(C) = P(B ∩ C) = 1 − P((B ∩ C)^c) = 1 − P(B^c ∪ C^c)).
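As a quick check of this identity (not part of the original solution), the following Python sketch
enumerates the eight joint outcomes of three independent events with illustrative probabilities
(the values 0.2, 0.5, 0.7 are assumptions) and verifies P(A^c ∩ (B^c ∪ C^c)) = P(A^c) P(B^c ∪ C^c).

    from itertools import product

    pA, pB, pC = 0.2, 0.5, 0.7   # assumed probabilities of A, B, C

    lhs = p_Ac = p_union = 0.0
    # Enumerate the 8 joint outcomes; independence gives product weights.
    for a, b, c in product([0, 1], repeat=3):
        w = (pA if a else 1 - pA) * (pB if b else 1 - pB) * (pC if c else 1 - pC)
        if not a and (not b or not c):
            lhs += w                 # A^c ∩ (B^c ∪ C^c)
        if not a:
            p_Ac += w                # A^c
        if not b or not c:
            p_union += w             # B^c ∪ C^c

    print(abs(lhs - p_Ac * p_union) < 1e-12)   # True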
2. We consider the electrical circuit set up as in the figure below, with 6 elements. We denote by
Ei the event that the ith element does not work (1 ≤ i ≤ 6), and we assume that the events
(Ei)1≤i≤6 are independent. We also know that P(Ei) = 1/3 for i = 1, 2, 3, P(Ei) = 1/2 for i = 4, 5,
and P(E6) = 1/4. We say that the circuit works if its left and right ends are connected by a
“path” using only elements which work (for example, the circuit works if elements 1 and 2 both
work, but it does not work if 1 and 3 both do not work). Compute the probability of the event
E that the circuit does not work.
[Figure: the circuit; elements 1 and 2 in series form one branch, and elements 3, 4 ∥ 5, and 6 in
series form the other; the two branches are connected in parallel between the left and right ends.]
We have
P(E) = P((E1 ∪ E2 ) ∩ (E3 ∪ (E4 ∩ E5 ) ∪ E6 )) = P(E1 ∪ E2 )P(E3 ∪ (E4 ∩ E5 ) ∪ E6 )
(using independence of the events for the second equality). We then compute separately the
two factors in this product. On the one hand,
P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2) = 1/3 + 1/3 − 1/9 = 5/9.
On the other hand,
P(E3 ∪ (E4 ∩ E5 ) ∪ E6 ) = P(E3 ) + P(E4 ∩ E5 ) + P(E6 ) − P(E3 ∩ E4 ∩ E5 ) − P(E3 ∩ E6 )
− P(E4 ∩ E5 ∩ E6 ) + P(E3 ∩ E4 ∩ E5 ∩ E6 )
= 1/3 + 1/4 + 1/4 − 1/12 − 1/12 − 1/16 + 1/48 = 5/8.
Note that we could also compute the probability of the complement, which involves simpler
calculations:
P(E3 ∪ (E4 ∩ E5) ∪ E6) = 1 − P((E3 ∪ (E4 ∩ E5) ∪ E6)^c)
= 1 − P(E3^c ∩ (E4 ∩ E5)^c ∩ E6^c)
= 1 − P(E3^c)P((E4 ∩ E5)^c)P(E6^c),
and we have P(E3^c) = 1 − P(E3) = 2/3, P(E6^c) = 1 − P(E6) = 3/4, and P((E4 ∩ E5)^c) =
1 − P(E4 ∩ E5) = 1 − 1/2 · 1/2 = 3/4, so we find again
P(E3 ∪ (E4 ∩ E5) ∪ E6) = 1 − 2/3 · 3/4 · 3/4 = 1 − 3/8 = 5/8.
Finally, we obtain
P(E) = P(E1 ∪ E2)P(E3 ∪ (E4 ∩ E5) ∪ E6) = 5/9 · 5/8 = 25/72.
3. We can perform a test to determine if a given electronic component has a defect, which oc-
curs with a probability 0.001. If the component is defective, the test detects it correctly with
probability 0.99, while if the component is not defective, the test wrongly detects a defect with
probability 0.01.
Let us denote by D the event that the component is defective, so that P(D) = 0.001. We also
denote by T the event that the test is positive, i.e. it says that the component has a defect.
We know that P(T|D) = 0.99, and P(T|D^c) = 0.01.
(i) If the test says that a component is defective, find the probability that this component
indeed has a defect.
We want to determine P(D|T ), which can be done by using Bayes’ formula:
P(D|T) = P(T|D)P(D) / [P(T|D)P(D) + P(T|D^c)P(D^c)]
= (0.99 · 0.001) / (0.99 · 0.001 + 0.01 · 0.999)
≈ 0.090.
(ii) If the test says that a component is not defective, find the probability that this component
indeed has no defect.
We again use Bayes' formula:
P(D^c|T^c) = P(T^c|D^c)P(D^c) / [P(T^c|D^c)P(D^c) + P(T^c|D)P(D)]
= (0.99 · 0.999) / (0.99 · 0.999 + 0.01 · 0.001)
≈ 0.99999.
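Both conditional probabilities can be reproduced numerically (not part of the original solution);
the short Python sketch below simply plugs the given numbers into Bayes' formula.

    # Given quantities from the problem statement.
    p_D = 0.001          # P(D): the component is defective
    p_T_given_D = 0.99   # P(T | D): detection probability
    p_T_given_Dc = 0.01  # P(T | D^c): false-positive probability

    # (i) P(D | T).
    p_T = p_T_given_D * p_D + p_T_given_Dc * (1 - p_D)
    print(p_T_given_D * p_D / p_T)                       # about 0.090

    # (ii) P(D^c | T^c).
    p_Tc = (1 - p_T_given_Dc) * (1 - p_D) + (1 - p_T_given_D) * p_D
    print((1 - p_T_given_Dc) * (1 - p_D) / p_Tc)         # about 0.99999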
4. Consider n independent random variables X1, . . . , Xn, each having density function fXi(x) =
3e^{−3x} 1[0,+∞)(x) (1 ≤ i ≤ n). Let Sn = ∑_{i=1}^{n} Xi.
(i) Compute E[Sn ] and Var(Sn ).
For E[Sn ], we use the linearity property of expectation:
E[Sn] = ∑_{i=1}^{n} E[Xi] = nE[X1] = n/3.
For Var(Sn ), we use the property that “the variance of the sum is the sum of the vari-
ances”, since the random variables (Xi )1≤i≤n are assumed to be independent:
Var(Sn) = ∑_{i=1}^{n} Var(Xi) = nVar(X1) = n/9.
Here, we used the formulas seen in class for the expectation and the variance of a random
variable with exponential distribution (here, with parameter 3). Let us quickly recall how
to obtain them.
– We can compute E[Xi] as follows, using the definition of the expectation for a
continuous random variable, and then an integration by parts:
E[Xi] = ∫_{−∞}^{+∞} x fXi(x) dx
= ∫_{−∞}^{+∞} x · 3e^{−3x} 1[0,+∞)(x) dx
= −∫_{0}^{+∞} x d(e^{−3x})
= [−x e^{−3x}]_{0}^{+∞} + ∫_{0}^{+∞} e^{−3x} dx
= [−(1/3) e^{−3x}]_{0}^{+∞}
= 1/3.
– A similar computation (but now, with two integrations by parts) yields E[Xi²] = 2/9,
so Var(Xi) = E[Xi²] − (E[Xi])² = 2/9 − (1/3)² = 1/9.
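These moments can also be verified symbolically (not part of the original solution); the sympy
sketch below computes E[X1], E[X1²] and Var(X1) for the Exp(3) density, from which E[Sn] = n/3
and Var(Sn) = n/9 follow by linearity and independence.

    import sympy as sp

    x = sp.symbols('x', positive=True)
    f = 3 * sp.exp(-3 * x)                         # Exp(3) density on [0, +∞)

    EX = sp.integrate(x * f, (x, 0, sp.oo))        # 1/3
    EX2 = sp.integrate(x**2 * f, (x, 0, sp.oo))    # 2/9
    print(EX, EX2, sp.simplify(EX2 - EX**2))       # 1/3 2/9 1/9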
(ii) Let Y = min(X1 , . . . , Xn ): compute the cumulative distribution function of Y , and then
its density. What is the distribution of Y ?
First, FY (a) = 0 and fY (a) = 0 for all a < 0. For a ≥ 0, we have
FY (a) = P(Y ≤ a)
= 1 − P(Y > a)
= 1 − P(min(X1 , . . . , Xn ) > a).
We then notice that min(X1 , . . . , Xn ) > a if and only if for all i = 1, . . . , n, Xi > a.
Hence,
P(min(X1, . . . , Xn) > a) = P(∩_{i=1}^{n} {Xi > a})
= ∏_{i=1}^{n} P(Xi > a)
= (e^{−3a})^n = e^{−3na}
(using independence, and the fact that P(Xi > a) = ∫_{a}^{+∞} 3e^{−3x} dx = e^{−3a} for a ≥ 0).
We thus have FY(a) = 1 − e^{−3na} for a ≥ 0, and we can recover the density by differentiating:
fY(a) = d/da FY(a) = d/da (1 − e^{−3na}) = 3n e^{−3na}.
Hence, fY(a) = 3n e^{−3na} 1[0,+∞)(a), which is the density of the exponential distribution
with parameter 3n.
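A quick Monte Carlo experiment (not part of the original solution; n = 5, the sample size and the
seed are arbitrary choices) illustrates the conclusion: the minimum of n independent Exp(3)
variables has sample mean close to 1/(3n), as an Exp(3n) variable should.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_samples = 5, 200_000

    # n independent Exp(3) variables per row (numpy's scale is 1/rate).
    samples = rng.exponential(scale=1/3, size=(n_samples, n))
    mins = samples.min(axis=1)

    print(mins.mean(), 1 / (3 * n))   # both close to 0.0667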
5. Let X be a continuous random variable, with density fX (x). We denote by FX its cumulative
distribution function.
(i) Let Y1 = 3X + 2. Express the cumulative distribution function of Y1 in terms of FX .
What is the density of Y1 , in terms of fX (hint: the density of a r.v. can be recovered as
the derivative of its cumulative distribution function)?
For the cumulative distribution function FY1 of Y1 , we have: for all y ∈ R,
FY1 (y) = P(Y1 ≤ y) = P(3X + 2 ≤ y)
= P(X ≤ y/3 − 2/3) = FX(y/3 − 2/3).
As we explained in class, the density fY1 of Y1 can then be recovered as the derivative of
FY1 (y) with respect to y:
fY1(y) = d/dy FY1(y) = d/dy FX(y/3 − 2/3) = (1/3) fX(y/3 − 2/3).
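To illustrate the formula (not part of the original solution), take X standard normal, an
assumption made only for this check: then Y1 = 3X + 2 is N(2, 9), and the transformed density
(1/3) fX(y/3 − 2/3) should coincide with the N(2, 9) density.

    import numpy as np
    from scipy.stats import norm

    y = np.linspace(-8.0, 12.0, 201)

    # Change-of-variables density of Y1 = 3X + 2, assuming X ~ N(0, 1).
    f_Y1 = norm.pdf((y - 2) / 3) / 3

    # Direct density of an N(2, 9) variable (standard deviation 3).
    print(np.allclose(f_Y1, norm.pdf(y, loc=2, scale=3)))   # True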
(ii) Let Y2 = 1 − X. Find its cumulative distribution function and its density, in terms of
FX and fX .
This is very similar to the previous question:
FY2 (y) = P(Y2 ≤ y) = P(1 − X ≤ y)
= P(X ≥ 1 − y) = 1 − P(X < 1 − y) = 1 − FX (1 − y)
(we used the fact that X is continuous: where?). Hence,
fY2(y) = d/dy FY2(y) = d/dy (1 − FX(1 − y)) = fX(1 − y).
(iii) Let Y3 = X². Find its cumulative distribution function and its density, in terms of FX
and fX.
We have FY3 (y) = 0 and fY3 (y) = 0 for y ≤ 0. For y > 0,
FY3(y) = P(Y3 ≤ y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y)
= P(X ≤ √y) − P(X ≤ −√y) = FX(√y) − FX(−√y)
(where did we use the fact that the random variable is continuous?). Hence,
fY3(y) = d/dy FY3(y) = d/dy (FX(√y) − FX(−√y)) = (1/(2√y)) (fX(√y) + fX(−√y)).
(iv) Assume now that X has standard normal distribution N(0, 1): what is the density of X²?
We use the result from the previous question with fX(y) = (1/√(2π)) e^{−y²/2} (the density of an
N(0, 1)-distributed r.v.): fX²(y) = 0 for y ≤ 0, and for y > 0,
fX²(y) = (1/(2√y)) (fX(√y) + fX(−√y)) = (1/√(2π)) · (1/(2√y)) · 2e^{−y/2} = (1/√(2π)) y^{−1/2} e^{−y/2}
(note that this is a classical distribution: it is known as the γ(1/2, 1/2) distribution).
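As a final check (not part of the original solution), the density obtained above can be compared
with scipy's χ² density with one degree of freedom, which is known to coincide with the
γ(1/2, 1/2) distribution.

    import numpy as np
    from scipy.stats import chi2

    y = np.linspace(0.01, 10.0, 200)

    # Density of X² derived above, for X ~ N(0, 1).
    f_X2 = np.exp(-y / 2) / (np.sqrt(2 * np.pi) * np.sqrt(y))

    print(np.allclose(f_X2, chi2.pdf(y, df=1)))   # True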