Tutorial 2 Solutions: Transformations and Moments
Department of Statistical Sciences
Date: March 3, 2025
Problem 1: Lognormal Distribution
Let X ∼ N(µ, σ²) and define Z = e^X.
(a) Finding the Density of Z
We use the transformation method. Since
\[
z = e^x \iff x = \ln z,
\]
the derivative is
\[
\frac{dx}{dz} = \frac{1}{z}.
\]
Thus, the density of Z is given by
\[
f_Z(z) = f_X(\ln z)\left|\frac{dx}{dz}\right| = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(\ln z - \mu)^2}{2\sigma^2}\right\}\frac{1}{z}, \qquad z > 0.
\]
(b) Finding the Mean and Variance of Z
Recall that for any constant t, the moment generating function (MGF) of a normal variable is
\[
E\left(e^{tX}\right) = \exp\left\{\mu t + \tfrac{1}{2}\sigma^2 t^2\right\}.
\]
Setting t = 1 gives the mean of Z:
\[
E(Z) = E(e^X) = \exp\left\{\mu + \tfrac{1}{2}\sigma^2\right\}.
\]
Similarly, for t = 2 we have
\[
E(e^{2X}) = \exp\left\{2\mu + 2\sigma^2\right\}.
\]
Thus, the variance is
\[
\operatorname{Var}(Z) = E(e^{2X}) - \left[E(e^X)\right]^2 = \exp\left\{2\mu + 2\sigma^2\right\} - \exp\left\{2\mu + \sigma^2\right\}.
\]
This can be factored as
\[
\operatorname{Var}(Z) = \exp\left\{2\mu + \sigma^2\right\}\left(\exp\{\sigma^2\} - 1\right).
\]
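As a numerical sanity check, the following sketch (NumPy assumed; the values µ = 0.5 and σ = 1.2 are arbitrary) compares Monte Carlo estimates of E(Z) and Var(Z) with the closed-form expressions above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2                       # arbitrary illustrative parameters
x = rng.normal(mu, sigma, size=1_000_000)
z = np.exp(x)                              # Z = e^X is lognormal

# Closed-form moments derived above
mean_theory = np.exp(mu + sigma**2 / 2)
var_theory = np.exp(2*mu + sigma**2) * (np.exp(sigma**2) - 1)

print(z.mean(), mean_theory)               # the pairs should roughly agree
print(z.var(), var_theory)
```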
Problem 2: Moments of the Beta Distribution
Let X ∼ Beta(r, s) with density
\[
f_X(x) = \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)}\, x^{r-1}(1-x)^{s-1}, \qquad 0 < x < 1.
\]
Mean
The mean is
\[
E(X) = \int_0^1 x f_X(x)\, dx = \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \int_0^1 x^{r}(1-x)^{s-1}\, dx.
\]
Using the definition of the Beta function,
\[
\int_0^1 x^{r}(1-x)^{s-1}\, dx = B(r+1, s) = \frac{\Gamma(r+1)\Gamma(s)}{\Gamma(r+s+1)},
\]
and noting Γ(r + 1) = rΓ(r) and Γ(r + s + 1) = (r + s)Γ(r + s), we obtain
\[
E(X) = \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \cdot \frac{r\Gamma(r)\Gamma(s)}{\Gamma(r+s+1)} = \frac{r}{r+s}.
\]
Variance
Similarly, one can show that
\[
E(X^2) = \frac{r(r+1)}{(r+s)(r+s+1)}.
\]
Thus,
\[
\operatorname{Var}(X) = E(X^2) - \left(E(X)\right)^2 = \frac{r(r+1)}{(r+s)(r+s+1)} - \left(\frac{r}{r+s}\right)^2 = \frac{rs}{(r+s)^2(r+s+1)}.
\]
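A quick simulation check (NumPy assumed; r = 2 and s = 5 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
r, s = 2.0, 5.0                            # arbitrary illustrative parameters
x = rng.beta(r, s, size=1_000_000)

mean_theory = r / (r + s)                          # = 2/7
var_theory = r*s / ((r + s)**2 * (r + s + 1))      # = 10/392

print(x.mean(), mean_theory)
print(x.var(), var_theory)
```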
Problem 3: Transformation of a Gamma Variable
Let X ∼ Gamma(α, λ) with density
\[
f_X(x) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x}, \qquad x > 0.
\]
Define Y = kX, where k > 0.
(a) Density of Y
Since Y = kX, we have X = Y/k and
\[
\frac{dx}{dy} = \frac{1}{k}.
\]
Thus,
\[
f_Y(y) = f_X\!\left(\frac{y}{k}\right)\frac{1}{k} = \frac{\lambda^\alpha}{\Gamma(\alpha)}\left(\frac{y}{k}\right)^{\alpha-1} e^{-\lambda y/k}\,\frac{1}{k} = \frac{\lambda^\alpha}{\Gamma(\alpha)\,k^\alpha}\, y^{\alpha-1} e^{-\lambda y/k}, \qquad y > 0.
\]
(b) Choice of k for a χ² Distribution
A chi-square distribution with ν degrees of freedom is a special case of the gamma distribution:
\[
\chi^2_\nu \sim \Gamma\!\left(\frac{\nu}{2}, \frac{1}{2}\right).
\]
To have Y ∼ χ²_ν, we require
\[
\alpha = \frac{\nu}{2} \quad \text{and} \quad \frac{\lambda}{k} = \frac{1}{2}.
\]
Thus,
\[
k = 2\lambda.
\]
With this choice, the density of Y becomes
\[
f_Y(y) = \frac{\lambda^\alpha}{\Gamma(\alpha)(2\lambda)^\alpha}\, y^{\alpha-1} e^{-y/2} = \frac{1}{2^\alpha\,\Gamma(\alpha)}\, y^{\alpha-1} e^{-y/2},
\]
which is exactly the density of χ²_{2α}.
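The scaling is easy to verify by simulation. A minimal sketch (NumPy assumed; α = 1.5 and λ = 0.7 are arbitrary, and note that NumPy parameterizes the gamma by shape and scale = 1/rate):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lam = 1.5, 0.7                      # arbitrary shape and rate
x = rng.gamma(shape=alpha, scale=1/lam, size=1_000_000)
y = 2 * lam * x                            # with k = 2λ, Y should be chi-square(2α)

chi2 = rng.chisquare(df=2*alpha, size=1_000_000)
# Compare a few quantiles of Y against a genuine chi-square sample
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(y, qs))
print(np.quantile(chi2, qs))
```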
Problem 4: Simulation and Transformation
The random variable X has density
\[
f_X(x) = \frac{5}{x^6}\, \mathbf{1}_{[1,\infty)}(x).
\]
(a) Simulating X from Uniform Random Numbers
To simulate from X, we use the inverse transform method. First, compute the cumulative distribution function (CDF) for x ≥ 1:
\[
F_X(x) = \int_1^x \frac{5}{t^6}\, dt = \left[-t^{-5}\right]_{t=1}^{t=x} = 1 - \frac{1}{x^5}.
\]
Let U ∼ U(0, 1) and set
\[
U = F_X(x) = 1 - \frac{1}{x^5}.
\]
Solving for x, we get
\[
\frac{1}{x^5} = 1 - U \implies x = \frac{1}{(1-U)^{1/5}}.
\]
Thus, to simulate a realization of X, generate U ∼ U(0, 1) and compute
\[
X = \frac{1}{(1-U)^{1/5}}.
\]
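The recipe translates directly into code. A minimal sketch (NumPy assumed); the comparison value E(X) = 5/4 comes from integrating x · f_X(x) over [1, ∞):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)
x = (1 - u) ** (-1/5)          # inverse CDF derived above

# E(X) = ∫_1^∞ 5 x^{-5} dx = 5/4; compare with the sample mean
print(x.mean())                # should be close to 1.25
```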
(b) Showing Y = ln X is Exponential
Let Y = ln X. Then X = e^Y, and since x ≥ 1 corresponds to y ≥ 0, the transformation formula gives
\[
f_Y(y) = f_X(e^y)\, e^y = \frac{5}{(e^y)^6}\, e^y = 5e^{-5y}, \qquad y \geq 0.
\]
This is the density of an exponential distribution with parameter 5.
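This, too, is easy to check with the simulator from part (a) (NumPy assumed; an Exp(5) variable has mean 1/5 and variance 1/25):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=1_000_000)
y = np.log((1 - u) ** (-1/5))  # Y = ln X, i.e. Y = -ln(1-U)/5

print(y.mean(), y.var())       # should be close to 0.2 and 0.04
```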
Problem 5: Linear Transformation of a Uniform Random Variable
Let X ∼ U(a, b) with density
\[
f_X(x) = \frac{1}{b-a}, \qquad x \in [a, b].
\]
Consider the transformation
Y = cX + d, with c ≠ 0.
If c > 0, then Y takes values in [ca + d, cb + d]. The transformation yields
\[
f_Y(y) = f_X\!\left(\frac{y-d}{c}\right)\frac{1}{|c|} = \frac{1}{(b-a)|c|}, \qquad y \in [ca+d,\ cb+d].
\]
Thus, Y is uniformly distributed on the interval [ca + d, cb + d]. (If c < 0, the same computation applies with the endpoints reversed, so Y ∼ U(cb + d, ca + d).)
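A short simulation check (NumPy assumed; the values a = 1, b = 3, c = −2, d = 5 are arbitrary, with c < 0 chosen deliberately to exercise the reversed endpoints):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = 1.0, 3.0, -2.0, 5.0
y = c * rng.uniform(a, b, size=1_000_000) + d

# For c < 0 the support is [cb + d, ca + d] = [-1, 3]
print(y.min(), y.max())
# A uniform on an interval of length |c|(b - a) has variance (|c|(b - a))^2 / 12
print(y.var(), (abs(c) * (b - a))**2 / 12)
```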
Problem 6: Expectation via Tail Probabilities
Let X be a non-negative discrete random variable taking values in ℕ = {0, 1, 2, . . .}. We want to show
\[
E(X) = \sum_{n=0}^{\infty} P(X > n).
\]
Proof: Write X as a sum of indicators:
\[
X = \sum_{n=0}^{\infty} \mathbf{1}_{\{X > n\}}.
\]
(If X = k, then exactly the indicators with n = 0, 1, . . . , k − 1 equal 1, and their sum is k.) Taking expectations and using linearity,
\[
E(X) = E\left(\sum_{n=0}^{\infty} \mathbf{1}_{\{X>n\}}\right) = \sum_{n=0}^{\infty} E\left(\mathbf{1}_{\{X>n\}}\right) = \sum_{n=0}^{\infty} P(X > n).
\]
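One way to check the identity empirically (NumPy assumed; the Poisson(3) choice is arbitrary, since any non-negative integer-valued variable works):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(lam=3.0, size=1_000_000)

# Tail sum: sum_n P(X > n), truncated where the tail is negligible
tail_sum = sum((x > n).mean() for n in range(60))
print(x.mean(), tail_sum)      # both should be close to 3.0
```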
Problem 7: Gamma Function and the Normal Density
(a) Showing Γ(1/2) = √π
Recall the definition of the Gamma function:
\[
\Gamma\!\left(\frac{1}{2}\right) = \int_0^\infty t^{-1/2} e^{-t}\, dt.
\]
Make the substitution t = x², so that dt = 2x dx; when t = 0, x = 0, and as t → ∞, x → ∞. Then
\[
\Gamma\!\left(\frac{1}{2}\right) = \int_0^\infty (x^2)^{-1/2} e^{-x^2}\, 2x\, dx = 2\int_0^\infty e^{-x^2}\, dx.
\]
It is known that
\[
\int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi} \implies \int_0^\infty e^{-x^2}\, dx = \frac{\sqrt{\pi}}{2}.
\]
Thus,
\[
\Gamma\!\left(\frac{1}{2}\right) = 2 \cdot \frac{\sqrt{\pi}}{2} = \sqrt{\pi}.
\]
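A quick numerical confirmation (SciPy assumed; we integrate the substituted form 2∫₀^∞ e^{−x²} dx, which avoids the singularity at t = 0):

```python
import math
from scipy.integrate import quad

val, _ = quad(lambda x: 2 * math.exp(-x * x), 0, math.inf)
print(val, math.sqrt(math.pi), math.gamma(0.5))   # all three should agree
```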
(b) Verifying the Normal Density Integrates to 1
The pdf of a normal random variable with mean µ and variance σ² is
\[
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}.
\]
Making the change of variable z = (x − µ)/σ (so that dz = dx/σ), we have
\[
\int_{-\infty}^{\infty} f(x)\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left\{-\frac{z^2}{2}\right\} dz = 1,
\]
since ∫_{−∞}^{∞} e^{−z²/2} dz = √(2π), which follows from part (a) by substituting z = √2 x.
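The same verification can be done numerically (SciPy assumed; µ = 1 and σ = 2.5 are arbitrary):

```python
import math
from scipy.integrate import quad

mu, sigma = 1.0, 2.5
pdf = lambda x: math.exp(-(x - mu)**2 / (2 * sigma**2)) / (math.sqrt(2*math.pi) * sigma)
total, _ = quad(pdf, -math.inf, math.inf)
print(total)                   # should be very close to 1.0
```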
Problem 8: Calculating E(|X|) for X ∼ N(0, 1) in Three Ways
Let X ∼ N(0, 1) and define Y = |X|. We want to show that
\[
E(Y) = \sqrt{\frac{2}{\pi}}.
\]
(a) Using the Survival Function (Tail Integration)
For a nonnegative random variable,
\[
E(Y) = \int_0^\infty P(Y > y)\, dy.
\]
Since Y = |X|, for y ≥ 0 we have
\[
P(Y > y) = P(|X| > y) = 2P(X > y) = 2\left(1 - \Phi(y)\right),
\]
where Φ(y) is the standard normal CDF. Thus,
\[
E(Y) = 2\int_0^\infty \left(1 - \Phi(y)\right) dy.
\]
A standard result (or evaluating the integral via integration by parts) shows that
\[
\int_0^\infty \left(1 - \Phi(y)\right) dy = \frac{1}{\sqrt{2\pi}},
\]
so that
\[
E(Y) = 2 \cdot \frac{1}{\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}}.
\]
(b) Using the Density of Y
Since X ∼ N(0, 1), the density of Y = |X| is
\[
f_Y(y) = 2\varphi(y) = \frac{2}{\sqrt{2\pi}}\, e^{-y^2/2}, \qquad y \geq 0,
\]
where φ(y) is the standard normal density. Then,
\[
E(Y) = \int_0^\infty y\, f_Y(y)\, dy = \frac{2}{\sqrt{2\pi}} \int_0^\infty y\, e^{-y^2/2}\, dy.
\]
Let u = y²/2 so that du = y dy. The integral becomes
\[
\int_0^\infty e^{-u}\, du = 1.
\]
Thus,
\[
E(Y) = \frac{2}{\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}}.
\]
(c) Direct Integration with the Transformation g(x) = |x|
Here,
\[
E(Y) = E(|X|) = \int_{-\infty}^{\infty} |x|\, \varphi(x)\, dx.
\]
Since the integrand is even,
\[
E(|X|) = 2\int_0^\infty x\, \varphi(x)\, dx = \frac{2}{\sqrt{2\pi}} \int_0^\infty x\, e^{-x^2/2}\, dx.
\]
As in part (b), with the substitution u = x²/2, we find
\[
\int_0^\infty e^{-u}\, du = 1,
\]
yielding
\[
E(|X|) = \sqrt{\frac{2}{\pi}}.
\]
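All three derivations target the same constant, √(2/π) ≈ 0.7979, which a Monte Carlo estimate confirms at once (NumPy assumed):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

print(np.abs(x).mean(), math.sqrt(2 / math.pi))
```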
Problem 9: Relationship Between the Beta and Gamma Functions
(a) Expressing Γ(a)Γ(b) as a Double Integral
By definition,
\[
\Gamma(a) = \int_0^\infty t^{a-1} e^{-t}\, dt, \qquad \Gamma(b) = \int_0^\infty s^{b-1} e^{-s}\, ds.
\]
Multiplying these,
\[
\Gamma(a)\Gamma(b) = \int_0^\infty \int_0^\infty t^{a-1} s^{b-1} e^{-(t+s)}\, ds\, dt.
\]
(b) Change of Variables
Let
\[
t = xy, \qquad s = x(1-y),
\]
with x ∈ (0, ∞) and y ∈ (0, 1). The Jacobian of this transformation is
\[
\frac{\partial(t, s)}{\partial(x, y)} = \det\begin{pmatrix} y & x \\ 1-y & -x \end{pmatrix} = -xy - x(1-y) = -x,
\]
so its absolute value is x. Then, the double integral becomes
\[
\Gamma(a)\Gamma(b) = \int_0^1 \int_0^\infty (xy)^{a-1} \left(x(1-y)\right)^{b-1} e^{-x}\, x\, dx\, dy.
\]
Simplify:
\[
\Gamma(a)\Gamma(b) = \int_0^1 y^{a-1}(1-y)^{b-1}\, dy \int_0^\infty x^{a+b-1} e^{-x}\, dx.
\]
Recognize that
\[
\int_0^\infty x^{a+b-1} e^{-x}\, dx = \Gamma(a+b),
\]
and
\[
\int_0^1 y^{a-1}(1-y)^{b-1}\, dy = B(a, b).
\]
Thus,
\[
\Gamma(a)\Gamma(b) = \Gamma(a+b)\, B(a, b) \implies B(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.
\]
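The identity is easy to test numerically (SciPy assumed; the values a = 2.3 and b = 1.7 are arbitrary):

```python
import math
from scipy.integrate import quad

a, b = 2.3, 1.7
beta_integral, _ = quad(lambda y: y**(a-1) * (1-y)**(b-1), 0, 1)
gamma_ratio = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
print(beta_integral, gamma_ratio)    # should agree to high precision
```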
Problem 10: Continuous Mixtures
Let f(x|λ) be an exponential density with parameter λ > 0, and suppose λ is random with density g(λ). Define
\[
h(x) = \int_0^\infty f(x|\lambda)\, g(\lambda)\, d\lambda.
\]
Since for each fixed λ, f(x|λ) is a density in x, we have f(x|λ) ≥ 0 and ∫_{−∞}^{∞} f(x|λ) dx = 1. Also, g(λ) is nonnegative and integrates to 1. Interchanging the order of integration (justified by Fubini's theorem, since the integrand is nonnegative) gives
\[
\int_{-\infty}^{\infty} h(x)\, dx = \int_0^\infty g(\lambda) \int_{-\infty}^{\infty} f(x|\lambda)\, dx\, d\lambda = \int_0^\infty g(\lambda)\, d\lambda = 1.
\]
Thus, h(x) is a valid density.
The same argument holds if we replace the exponential density by any density f(x|λ); the mixture
\[
h(x) = \int f(x|\lambda)\, g(\lambda)\, d\lambda
\]
remains a density.
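As an illustration (SciPy assumed), take the arbitrary mixing density g(λ) = λe^{−λ}, a Gamma(2, 1) density, and verify numerically that the resulting exponential mixture integrates to 1:

```python
import math
from scipy.integrate import quad

g = lambda lam: lam * math.exp(-lam)           # Gamma(2, 1) mixing density
f = lambda x, lam: lam * math.exp(-lam * x)    # exponential density in x

def h(x):
    # mixture density h(x) = ∫ f(x|λ) g(λ) dλ; here h(x) = 2/(1+x)^3 in closed form
    val, _ = quad(lambda lam: f(x, lam) * g(lam), 0, math.inf)
    return val

total, _ = quad(h, 0, math.inf)
print(total)                   # should be very close to 1.0
```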
Problem 11: Minimizing the Mean Squared Error
Let X ∈ L² and define
\[
h(a) = E\left[(X - a)^2\right], \qquad a \in \mathbb{R}.
\]
Expanding the square, we have
\[
h(a) = E\left[(X - E(X) + E(X) - a)^2\right] = E\left[(X - E(X))^2\right] + 2(E(X) - a)\, E\left[X - E(X)\right] + (E(X) - a)^2.
\]
Since E[X − E(X)] = 0, this simplifies to
\[
h(a) = \operatorname{Var}(X) + (E(X) - a)^2.
\]
Clearly, h(a) is minimized when (E(X) − a)² is minimized, i.e., when a = E(X), and the minimum value is
\[
h(E(X)) = \operatorname{Var}(X).
\]
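A small empirical demonstration (NumPy assumed; the Gamma sample is an arbitrary choice of an L² variable):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=100_000)

grid = np.linspace(x.mean() - 2, x.mean() + 2, 401)
h = [np.mean((x - a)**2) for a in grid]
best = grid[np.argmin(h)]
print(best, x.mean())          # the minimizer should sit at the sample mean
print(min(h), x.var())         # and the minimum should equal the sample variance
```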
Problem 12: Moments of a Normal Distribution
Let X ∼ N(µ_X, σ²_X) and Y ∼ N(0, 1).
(a) Standardizing X
Define
\[
Z = \frac{X - \mu_X}{\sigma_X}.
\]
Since linear transformations of normal variables are normal, it follows that Z ∼ N(0, 1).
(b) Moments of a Standard Normal Variable
For a standard normal variable, the odd moments are zero (by symmetry) and the even moments are given by
\[
E(Y^r) =
\begin{cases}
\dfrac{r!}{2^{r/2}\,(r/2)!}, & r = 0, 2, 4, \ldots, \\[1ex]
0, & r = 1, 3, 5, \ldots.
\end{cases}
\]
This result is often derived using the moment generating function of Y or via integration in polar coordinates.
(c) Moments of (X − µ_X)
Since
\[
X - \mu_X = \sigma_X Z,
\]
we have for any integer r ≥ 0,
\[
E\left[(X - \mu_X)^r\right] = \sigma_X^r\, E(Z^r) =
\begin{cases}
\dfrac{\sigma_X^r\, r!}{2^{r/2}\,(r/2)!}, & r \text{ even}, \\[1ex]
0, & r \text{ odd}.
\end{cases}
\]
(d) Skewness and Kurtosis of X
The skewness of X is defined as
\[
\text{Skewness} = \frac{E\left[(X - \mu_X)^3\right]}{\sigma_X^3}.
\]
Since E[(X − µ_X)³] = 0, the skewness is 0.
The kurtosis is given by
\[
\text{Kurtosis} = \frac{E\left[(X - \mu_X)^4\right]}{\sigma_X^4}
\]
(the excess kurtosis subtracts 3 from this quantity). Using the formula from part (c) with r = 4, we have
\[
E\left[(X - \mu_X)^4\right] = \sigma_X^4\, \frac{4!}{2^2\, 2!} = \sigma_X^4\, \frac{24}{8} = 3\sigma_X^4.
\]
Thus,
\[
\frac{E\left[(X - \mu_X)^4\right]}{\sigma_X^4} = 3,
\]
so the kurtosis is 3 and the excess kurtosis is 0. Neither skewness nor kurtosis depends on µ_X or σ_X, which confirms that they are properties of the standard normal distribution.
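A simulation check of parts (b) through (d) (NumPy assumed; µ_X = −1 and σ_X = 2 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = -1.0, 2.0
x = rng.normal(mu, sigma, size=2_000_000)
z = (x - mu) / sigma

# Even moments E(Z^r) = r!/(2^{r/2} (r/2)!): 1, 3, 15 for r = 2, 4, 6
print([np.mean(z**r) for r in (2, 4, 6)])
# Skewness 0 and kurtosis 3, regardless of mu and sigma
print(np.mean((x - mu)**3) / sigma**3, np.mean((x - mu)**4) / sigma**4)
```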