Chapter 6 Distributions Derived from the Normal Distribution

6.2 The $\chi^2$, t, and F Distributions (and gamma, beta)
Normal Distribution

Consider the integral

$$I = \int_{-\infty}^{\infty} e^{-y^2/2}\, dy$$

To evaluate the integral, note that $I > 0$ and

$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left(-\frac{y^2 + z^2}{2}\right) dy\, dz$$

This integral can be easily evaluated by changing to polar coordinates, $y = r\sin(\theta)$ and $z = r\cos(\theta)$. Then

$$I^2 = \int_0^{2\pi}\int_0^{\infty} e^{-r^2/2}\, r\, dr\, d\theta = \int_0^{2\pi} \left[-e^{-r^2/2}\right]_0^{\infty} d\theta = \int_0^{2\pi} d\theta = 2\pi$$

This implies that $I = \sqrt{2\pi}$ and

$$\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy = 1$$
If we introduce a new variable of integration

$$y = \frac{x - a}{b},$$

where $b > 0$, the integral becomes

$$\int_{-\infty}^{\infty} \frac{1}{b\sqrt{2\pi}} \exp\left(-\frac{(x - a)^2}{2b^2}\right) dx = 1$$

This implies that

$$f(x) = \frac{1}{b\sqrt{2\pi}} \exp\left(-\frac{(x - a)^2}{2b^2}\right)$$

for $x \in (-\infty, \infty)$ satisfies the conditions of being a pdf. A random variable of the continuous type with a pdf of this form is said to have a normal distribution.
Let's find the mgf of a normal distribution.

$$M(t) = \int_{-\infty}^{\infty} e^{tx}\, \frac{1}{b\sqrt{2\pi}} \exp\left(-\frac{(x - a)^2}{2b^2}\right) dx$$

$$= \int_{-\infty}^{\infty} \frac{1}{b\sqrt{2\pi}} \exp\left(-\frac{-2b^2 t x + x^2 - 2ax + a^2}{2b^2}\right) dx$$

$$= \exp\left(-\frac{a^2 - (a + b^2 t)^2}{2b^2}\right) \int_{-\infty}^{\infty} \frac{1}{b\sqrt{2\pi}} \exp\left(-\frac{(x - a - b^2 t)^2}{2b^2}\right) dx$$

$$= \exp\left(at + \frac{b^2 t^2}{2}\right)$$
Note that the exponential form of the mgf allows for simple derivatives:

$$M'(t) = M(t)(a + b^2 t)$$

and

$$M''(t) = M(t)(a + b^2 t)^2 + b^2 M(t)$$

$$\mu = M'(0) = a$$

$$\sigma^2 = M''(0) - \mu^2 = a^2 + b^2 - a^2 = b^2$$
Using these facts, we write the pdf of the normal distribution in its usual form

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$$

for $x \in (-\infty, \infty)$. Also, we write the mgf as

$$M(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)$$
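As a numerical aside (not part of the original notes), the normalization of this pdf and the mean and variance read off from the mgf can be checked with a discretized integral. The values of mu and sigma below are illustrative.

```python
import numpy as np

# Check that the N(mu, sigma^2) pdf integrates to 1 and that its
# mean and variance match the mgf-derived values mu and sigma^2.
mu, sigma = 1.5, 2.0
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
pdf = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

dx = x[1] - x[0]
total = (pdf * dx).sum()               # should be very close to 1
mean = (x * pdf * dx).sum()            # should be very close to mu
var = ((x - mean) ** 2 * pdf * dx).sum()  # should be very close to sigma^2
print(total, mean, var)
```

The tails beyond ten standard deviations are numerically negligible, so a simple Riemann sum on a fine grid suffices here.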
Theorem. If the random variable X is $N(\mu, \sigma^2)$, $\sigma^2 > 0$, then the random variable $W = (X - \mu)/\sigma$ is $N(0, 1)$.

Proof:

$$F(w) = P\left[\frac{X - \mu}{\sigma} \le w\right] = P[X \le w\sigma + \mu] = \int_{-\infty}^{w\sigma + \mu} \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) dx.$$

If we change variables letting $y = (x - \mu)/\sigma$ we have

$$F(w) = \int_{-\infty}^{w} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy$$

Thus, the pdf $f(w) = F'(w)$ is just

$$f(w) = \frac{1}{\sqrt{2\pi}}\, e^{-w^2/2}$$

for $-\infty < w < \infty$, which shows that W is $N(0, 1)$.
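The standardization theorem is easy to see in simulation (a sketch with illustrative parameter values, not from the notes): subtracting the mean and dividing by the standard deviation recovers standard normal behavior.

```python
import numpy as np

# If X ~ N(mu, sigma^2), then W = (X - mu)/sigma should behave as N(0, 1).
rng = np.random.default_rng(0)
mu, sigma = 3.0, 1.7
x = rng.normal(mu, sigma, size=1_000_000)
w = (x - mu) / sigma

print(w.mean(), w.var())    # near 0 and 1
# P[W <= 1] should be near Phi(1), about 0.8413
print((w <= 1.0).mean())
```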
Recall, the gamma function is defined by

$$\Gamma(\alpha) = \int_0^{\infty} y^{\alpha - 1} e^{-y}\, dy$$

for $\alpha > 0$.

If $\alpha = 1$,

$$\Gamma(1) = \int_0^{\infty} e^{-y}\, dy = 1$$

If $\alpha > 1$, integration by parts can be used to show that

$$\Gamma(\alpha) = (\alpha - 1) \int_0^{\infty} y^{\alpha - 2} e^{-y}\, dy = (\alpha - 1)\,\Gamma(\alpha - 1)$$

By iterating this, we see that when $\alpha$ is a positive integer, $\Gamma(\alpha) = (\alpha - 1)!$.
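Both the factorial identity and the recursion can be confirmed directly with the standard library's gamma function (a small check, not from the notes):

```python
import math

# Gamma(alpha) = (alpha - 1)! for positive integers alpha.
for alpha in range(1, 8):
    assert math.gamma(alpha) == math.factorial(alpha - 1)

# The recursion Gamma(alpha) = (alpha - 1) * Gamma(alpha - 1)
# also holds for non-integer alpha.
alpha = 3.5
print(math.gamma(alpha), (alpha - 1) * math.gamma(alpha - 1))
```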
In the integral defining $\Gamma(\alpha)$, let's have a change of variables $y = x/\beta$ for some $\beta > 0$. Then

$$\Gamma(\alpha) = \int_0^{\infty} \left(\frac{x}{\beta}\right)^{\alpha - 1} e^{-x/\beta} \left(\frac{1}{\beta}\right) dx$$

Then, we see that

$$1 = \int_0^{\infty} \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha - 1} e^{-x/\beta}\, dx$$

When $\alpha > 0$, $\beta > 0$ we have that

$$f(x) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha - 1} e^{-x/\beta}$$

is a pdf for a continuous random variable with space $(0, \infty)$. A random variable with a pdf of this form is said to have a gamma distribution with parameters $\alpha$ and $\beta$.
Recall, we can find the mgf of a gamma distribution.

$$M(t) = \int_0^{\infty} \frac{e^{tx}}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha - 1} e^{-x/\beta}\, dx$$

Set $y = x(1 - \beta t)/\beta$ for $t < 1/\beta$. Then

$$M(t) = \int_0^{\infty} \frac{\beta/(1 - \beta t)}{\Gamma(\alpha)\,\beta^{\alpha}} \left(\frac{\beta y}{1 - \beta t}\right)^{\alpha - 1} e^{-y}\, dy$$

$$= \left(\frac{1}{1 - \beta t}\right)^{\alpha} \int_0^{\infty} \frac{1}{\Gamma(\alpha)}\, y^{\alpha - 1} e^{-y}\, dy$$

$$= \frac{1}{(1 - \beta t)^{\alpha}}$$

for $t < \dfrac{1}{\beta}$.
$$M'(t) = \alpha\beta\,(1 - \beta t)^{-\alpha - 1}$$

$$M''(t) = \alpha(\alpha + 1)\beta^2\,(1 - \beta t)^{-\alpha - 2}$$

So, we can find the mean and variance by

$$\mu = M'(0) = \alpha\beta$$

and

$$\sigma^2 = M''(0) - \mu^2 = \alpha(\alpha + 1)\beta^2 - \alpha^2\beta^2 = \alpha\beta^2$$
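A Monte Carlo sketch (illustrative values of alpha and beta, not from the notes) confirms the mean $\alpha\beta$ and variance $\alpha\beta^2$ just derived from the mgf:

```python
import numpy as np

# Sample from a gamma distribution with shape alpha and scale beta
# (the same parameterization as in the notes) and compare sample
# moments with the mgf-derived mean and variance.
rng = np.random.default_rng(1)
alpha, beta = 2.5, 3.0
x = rng.gamma(shape=alpha, scale=beta, size=2_000_000)

print(x.mean(), alpha * beta)        # both near 7.5
print(x.var(), alpha * beta ** 2)    # both near 22.5
```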
An important special case is when $\alpha = r/2$, where r is a positive integer, and $\beta = 2$. A random variable X with pdf

$$f(x) = \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, x^{r/2 - 1} e^{-x/2}$$

for $x > 0$ is said to have a chi-square distribution with r degrees of freedom.

The mgf for this distribution is

$$M(t) = (1 - 2t)^{-r/2}$$

for $t < 1/2$.
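Since the chi-square(r) distribution is the gamma distribution with $\alpha = r/2$ and $\beta = 2$, the gamma moment formulas give mean $r$ and variance $2r$. A quick simulation (with an illustrative r) checks that the two samplers agree on these moments:

```python
import numpy as np

# chi-square(r) is gamma(shape=r/2, scale=2): mean r, variance 2r.
rng = np.random.default_rng(2)
r = 5
v = rng.chisquare(r, size=2_000_000)
g = rng.gamma(shape=r / 2, scale=2.0, size=2_000_000)

print(v.mean(), g.mean(), r)       # all near 5
print(v.var(), g.var(), 2 * r)     # all near 10
```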
Example: Let X have the pdf

$$f(x) = 1$$

for $0 < x < 1$. Let $Y = -2\ln(X)$. Then $x = g^{-1}(y) = e^{-y/2}$.

The space A is $\{x : 0 < x < 1\}$, which the one-to-one transformation $y = -2\ln(x)$ maps onto B = $\{y : 0 < y < \infty\}$.

The Jacobian of the transformation is

$$J = -\frac{1}{2}\, e^{-y/2}$$

Accordingly, the pdf of Y is

$$f(y) = f(e^{-y/2})\,|J| = \frac{1}{2}\, e^{-y/2}$$

for $0 < y < \infty$.

Recall the pdf of a chi-square distribution with r degrees of freedom:

$$f(x) = \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, x^{r/2 - 1} e^{-x/2}$$

Comparing the two, we see that the pdf of Y is exactly the chi-square pdf with r = 2, so Y is $\chi^2(2)$.

Definition (Book): If Z is a standard normal random variable, the distribution of $U = Z^2$ is called a chi-square distribution with 1 degree of freedom.

Theorem: If the random variable X is $N(\mu, \sigma^2)$, then the random variable $V = (X - \mu)^2/\sigma^2$ is $\chi^2(1)$.
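Both facts on this page can be sketched by simulation (not part of the original notes): $-2\ln(X)$ with X uniform should match the $\chi^2(2)$ moments (mean 2, variance 4), and $Z^2$ should match the $\chi^2(1)$ moments (mean 1, variance 2).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000

# (1) Y = -2 ln(X), X ~ uniform(0, 1), should be chi-square(2).
y = -2.0 * np.log(rng.uniform(size=n))
print(y.mean(), y.var())   # near 2 and 4

# (2) U = Z^2, Z ~ N(0, 1), should be chi-square(1).
u = rng.standard_normal(n) ** 2
print(u.mean(), u.var())   # near 1 and 2
```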
Beta Distribution

Let $X_1$ and $X_2$ be independent gamma variables with joint pdf

$$h(x_1, x_2) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, x_1^{\alpha - 1} x_2^{\beta - 1} e^{-x_1 - x_2}$$

for $0 < x_1 < \infty$ and $0 < x_2 < \infty$, where $\alpha > 0$, $\beta > 0$.

Let $Y_1 = X_1 + X_2$ and $Y_2 = \dfrac{X_1}{X_1 + X_2}$.

$$y_1 = g_1(x_1, x_2) = x_1 + x_2 \qquad y_2 = g_2(x_1, x_2) = \frac{x_1}{x_1 + x_2}$$

$$x_1 = h_1(y_1, y_2) = y_1 y_2 \qquad x_2 = h_2(y_1, y_2) = y_1(1 - y_2)$$
$$J = \begin{vmatrix} y_2 & y_1 \\ 1 - y_2 & -y_1 \end{vmatrix} = -y_1$$

The transformation is one-to-one and maps A, the first quadrant of the $x_1 x_2$ plane, onto

B = $\{(y_1, y_2) : 0 < y_1 < \infty,\ 0 < y_2 < 1\}$.

The joint pdf of $Y_1, Y_2$ is

$$f(y_1, y_2) = \frac{y_1}{\Gamma(\alpha)\Gamma(\beta)}\, (y_1 y_2)^{\alpha - 1}\, [y_1(1 - y_2)]^{\beta - 1}\, e^{-y_1}$$

$$= \frac{y_2^{\alpha - 1}(1 - y_2)^{\beta - 1}}{\Gamma(\alpha)\Gamma(\beta)}\, y_1^{\alpha + \beta - 1} e^{-y_1}$$

for $(y_1, y_2) \in$ B.

Because B is a rectangular region and because $f(y_1, y_2)$ can be factored into a function of $y_1$ and a function of $y_2$, it follows that $Y_1$ and $Y_2$ are statistically independent.
The marginal pdf of $Y_2$ is

$$f_{Y_2}(y_2) = \frac{y_2^{\alpha - 1}(1 - y_2)^{\beta - 1}}{\Gamma(\alpha)\Gamma(\beta)} \int_0^{\infty} y_1^{\alpha + \beta - 1} e^{-y_1}\, dy_1$$

$$= \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, y_2^{\alpha - 1}(1 - y_2)^{\beta - 1}$$

for $0 < y_2 < 1$.

This is the pdf of a beta distribution with parameters $\alpha$ and $\beta$.

Also, since $f(y_1, y_2) = f_{Y_1}(y_1) f_{Y_2}(y_2)$, we see that

$$f_{Y_1}(y_1) = \frac{1}{\Gamma(\alpha + \beta)}\, y_1^{\alpha + \beta - 1} e^{-y_1}$$

for $0 < y_1 < \infty$.

Thus, we see that $Y_1$ has a gamma distribution with parameter values $\alpha + \beta$ and 1.
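The whole construction can be sketched by simulation (illustrative alpha, beta; not from the notes): with independent gammas $X_1$ and $X_2$, the ratio $Y_2$ should show beta moments, the sum $Y_1$ should show gamma$(\alpha+\beta, 1)$ moments, and the two should be (approximately) uncorrelated, consistent with independence.

```python
import numpy as np

# Y1 = X1 + X2 ~ gamma(alpha + beta, 1); Y2 = X1/(X1 + X2) ~ beta(alpha, beta).
rng = np.random.default_rng(4)
alpha, beta = 2.0, 3.0
n = 1_000_000
x1 = rng.gamma(alpha, 1.0, size=n)
x2 = rng.gamma(beta, 1.0, size=n)
y1, y2 = x1 + x2, x1 / (x1 + x2)

print(y1.mean(), alpha + beta)            # gamma(alpha + beta, 1) mean
print(y2.mean(), alpha / (alpha + beta))  # beta mean
print(np.corrcoef(y1, y2)[0, 1])          # near 0, consistent with independence
```

A near-zero correlation does not by itself prove independence, but here it agrees with the factorization argument in the text.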
To find the mean and variance of the beta distribution, it is helpful to notice from the pdf that for all $\alpha > 0$ and $\beta > 0$,

$$\int_0^1 y^{\alpha - 1}(1 - y)^{\beta - 1}\, dy = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}$$

The expected value of a random variable with a beta distribution is

$$\int_0^1 y\, g(y)\, dy = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 y^{\alpha}(1 - y)^{\beta - 1}\, dy$$

$$= \frac{\Gamma(\alpha + 1)\Gamma(\beta)}{\Gamma(\alpha + 1 + \beta)} \cdot \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} = \frac{\alpha}{\alpha + \beta}$$

This follows from applying the fact that

$$\Gamma(\alpha + 1) = \alpha\,\Gamma(\alpha)$$
To find the variance, we apply the same idea to find $E[Y^2]$ and use the fact that $\mathrm{var}(Y) = E[Y^2] - \mu^2$.

$$\sigma^2 = \frac{\alpha\beta}{(\alpha + \beta + 1)(\alpha + \beta)^2}$$
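A simulation check of the two beta moment formulas (illustrative alpha and beta, not from the notes):

```python
import numpy as np

# Compare sample moments of beta(alpha, beta) with the formulas
# mean = alpha/(alpha+beta), var = alpha*beta/((alpha+beta+1)(alpha+beta)^2).
rng = np.random.default_rng(5)
alpha, beta = 4.0, 2.0
y = rng.beta(alpha, beta, size=2_000_000)

mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta + 1) * (alpha + beta) ** 2)
print(y.mean(), mean)   # both near 2/3
print(y.var(), var)     # both near 0.0317
```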
t Distribution

Let W and V be independent random variables for which W is $N(0, 1)$ and V is $\chi^2(r)$.

$$f(w, v) = \frac{1}{\sqrt{2\pi}}\, e^{-w^2/2} \cdot \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, v^{r/2 - 1} e^{-v/2}$$

for $-\infty < w < \infty$, $0 < v < \infty$.

Define a new random variable T by

$$T = \frac{W}{\sqrt{V/r}}$$

To find the pdf $f_T(t)$ we use the change of variables technique with transformations

$$t = \frac{w}{\sqrt{v/r}} \quad \text{and} \quad u = v.$$
These define a one-to-one transformation that maps

A = $\{(w, v) : -\infty < w < \infty,\ 0 < v < \infty\}$ to
B = $\{(t, u) : -\infty < t < \infty,\ 0 < u < \infty\}$.

The inverse transformations are

$$w = \frac{t\sqrt{u}}{\sqrt{r}} \quad \text{and} \quad v = u.$$

Thus, it is easy to see that

$$|J| = \frac{\sqrt{u}}{\sqrt{r}}$$
By applying the change of variable technique, we see that the joint pdf of T and U is

$$f_{TU}(t, u) = f_{WV}\!\left(\frac{t\sqrt{u}}{\sqrt{r}},\, u\right) |J| = \frac{u^{r/2 - 1}}{\sqrt{2\pi}\,\Gamma(r/2)\, 2^{r/2}} \exp\left[-\frac{u}{2}\left(1 + \frac{t^2}{r}\right)\right] \frac{\sqrt{u}}{\sqrt{r}}$$

for $-\infty < t < \infty$, $0 < u < \infty$.

To find the marginal pdf of T we compute

$$f_T(t) = \int_0^{\infty} f(t, u)\, du = \int_0^{\infty} \frac{u^{(r+1)/2 - 1}}{\sqrt{2\pi r}\,\Gamma(r/2)\, 2^{r/2}} \exp\left[-\frac{u}{2}\left(1 + \frac{t^2}{r}\right)\right] du$$

This simplifies with a change of variables $z = u\,[1 + (t^2/r)]/2$.
$$f_T(t) = \int_0^{\infty} \frac{1}{\sqrt{2\pi r}\,\Gamma(r/2)\, 2^{r/2}} \left(\frac{2z}{1 + t^2/r}\right)^{(r+1)/2 - 1} e^{-z} \left(\frac{2}{1 + t^2/r}\right) dz$$

$$= \frac{\Gamma[(r + 1)/2]}{\sqrt{\pi r}\,\Gamma(r/2)\,(1 + t^2/r)^{(r+1)/2}}$$

for $-\infty < t < \infty$.

A random variable with this pdf is said to have a t distribution with r degrees of freedom.
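The construction $T = W/\sqrt{V/r}$ can be sketched directly (illustrative r, not from the notes) and compared with the known t moments, mean 0 for $r > 1$ and variance $r/(r-2)$ for $r > 2$:

```python
import numpy as np

# Build T = W / sqrt(V/r) from W ~ N(0,1) and V ~ chi-square(r),
# then compare sample moments with the t(r) mean and variance.
rng = np.random.default_rng(6)
r = 8
n = 2_000_000
w = rng.standard_normal(n)
v = rng.chisquare(r, size=n)
t = w / np.sqrt(v / r)

print(t.mean())               # near 0
print(t.var(), r / (r - 2))   # both near 4/3
```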
F Distribution

Let U and V be independent chi-square random variables with $r_1$ and $r_2$ degrees of freedom, respectively.

$$f(u, v) = \frac{u^{r_1/2 - 1}\, v^{r_2/2 - 1}\, e^{-(u + v)/2}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1 + r_2)/2}}$$

Define a new random variable

$$W = \frac{U/r_1}{V/r_2}$$

To find $f_W(w)$ we consider the transformation

$$w = \frac{u/r_1}{v/r_2} \quad \text{and} \quad z = v.$$

This maps

A = $\{(u, v) : 0 < u < \infty,\ 0 < v < \infty\}$ to
B = $\{(w, z) : 0 < w < \infty,\ 0 < z < \infty\}$.
The inverse transformations are

$$u = (r_1/r_2)\, z w \quad \text{and} \quad v = z.$$

This results in

$$|J| = (r_1/r_2)\, z$$

The joint pdf of W and Z by the change of variables technique is

$$f(w, z) = \frac{\left(\dfrac{r_1 z w}{r_2}\right)^{r_1/2 - 1} z^{r_2/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1 + r_2)/2}} \exp\left[-\frac{z}{2}\left(\frac{r_1 w}{r_2} + 1\right)\right] \frac{r_1 z}{r_2}$$

for $(w, z) \in$ B.

The marginal pdf of W is

$$f_W(w) = \int_0^{\infty} f(w, z)\, dz$$
$$= \int_0^{\infty} \frac{(r_1/r_2)^{r_1/2}\, w^{r_1/2 - 1}\, z^{(r_1 + r_2)/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1 + r_2)/2}} \exp\left[-\frac{z}{2}\left(\frac{r_1 w}{r_2} + 1\right)\right] dz$$

We simplify this by changing the variable of integration to

$$y = \frac{z}{2}\left(\frac{r_1 w}{r_2} + 1\right)$$

Then the pdf $f_W(w)$ is

$$\int_0^{\infty} \frac{(r_1/r_2)^{r_1/2}\, w^{r_1/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1 + r_2)/2}} \left(\frac{2y}{r_1 w/r_2 + 1}\right)^{(r_1 + r_2)/2 - 1} e^{-y} \left(\frac{2}{r_1 w/r_2 + 1}\right) dy$$

$$= \frac{\Gamma[(r_1 + r_2)/2]\,(r_1/r_2)^{r_1/2}\, w^{r_1/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\,(1 + r_1 w/r_2)^{(r_1 + r_2)/2}}$$

for $0 < w < \infty$.

A random variable with a pdf of this form is said to have an F distribution with numerator degrees of freedom $r_1$ and denominator degrees of freedom $r_2$.
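The defining ratio can be simulated directly (illustrative degrees of freedom, not from the notes) and compared with the known F mean, $r_2/(r_2 - 2)$ for $r_2 > 2$:

```python
import numpy as np

# Build W = (U/r1)/(V/r2) from independent chi-squares and compare
# the sample mean with the F(r1, r2) mean r2/(r2 - 2).
rng = np.random.default_rng(7)
r1, r2 = 5, 10
n = 2_000_000
u = rng.chisquare(r1, size=n)
v = rng.chisquare(r2, size=n)
w = (u / r1) / (v / r2)

print(w.mean(), r2 / (r2 - 2))   # both near 1.25
```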