Chapter2 Proof Solution

Chapter2 Point Estimation_proof

Uploaded by Chhin Visal

Chapter 2
POINT ESTIMATION

PHOK Ponna and PHAUK Sokkhey

Department of Applied Mathematics and Statistics


Institute of Technology of Cambodia

27/10/2022

AMS (ITC) POINT ESTIMATION 27/10/2022 1 / 27
Contents

1 Introduction

2 Point Estimation

3 The methods of finding point estimators

4 Criteria For Evaluating The Goodness of Estimators

Introduction

One aspect of inferential statistics is estimation, which is the process of estimating the value of a parameter from information obtained from a sample.

The statistical procedures for estimating the population mean, proportion, variance, and standard deviation will be explained in this chapter.

Inferential statistical techniques have various assumptions that must be met before valid conclusions can be obtained. One common assumption is that the samples must be randomly selected. Another common assumption is that either the sample size must be greater than or equal to 30, or, if the sample size is less than 30, the population must be normally or approximately normally distributed. The methods for checking normality are to inspect the histogram to see whether it is approximately bell-shaped, to check for outliers, and, if possible, to generate a normal quantile plot and see whether the points fall close to a straight line. (Note: An area of statistics called nonparametric statistics does not require the variable to be normally distributed.)
Point Estimation

Definition 1
A point estimate is a specific numerical value estimate of a parameter. Sample measures (i.e., statistics) are used to estimate population measures (i.e., parameters). These statistics are called estimators.

Three Properties of a Good Estimator
1. The estimator should be an unbiased estimator. That is, the expected value or the mean of the estimates obtained from samples of a given size is equal to the parameter being estimated.
2. The estimator should be consistent. For a consistent estimator, as the sample size increases, the value of the estimator approaches the value of the parameter estimated.
3. The estimator should be a relatively efficient estimator. That is, of all the statistics that can be used to estimate a parameter, the relatively efficient estimator has the smallest variance.

Definition 2
Let X1 , . . . , Xn be a random sample drawn from a population with a
probability density function (pdf) or probability mass function (pmf)
f (x, θ1 , . . . , θm ), where θ1 , . . . , θm are the parameters. The actual values
of these parameters are not known. The statistics
gi (X1 , . . . , Xn ), i = 1, . . . , m, which can be used to estimate the value
of each of the parameters θi , are called estimators for the parameters,
and the values calculated from these statistics using particular sample
data values are called point estimates of the parameters. Estimators
of θi are denoted by θ̂i , where θ̂i = gi (X1 , . . . , Xn ), i = 1, . . . , m.

Remark 1
The estimators are random variables. When we actually run the
experiment and observe the data, let the observed values of the
random variables X1 , . . . , Xn be x1 , . . . , xn ; then, θ̂(X1 , . . . , Xn ) is an
estimator, and its value θ̂(x1 , . . . , xn ) is an estimate.
The methods of finding point estimators

There are several methods for finding an estimator of θ. Some of these methods are:
(1) Moment Method
(2) Maximum Likelihood Method
(3) Bayes Method
(4) Least Squares Method
(5) Minimum Chi-Square Method
(6) Minimum Distance Method
In this chapter, we only discuss the first two methods of estimating a population parameter.


Definition 3
Let X1 , · · · , Xn be a random sample from a pmf or pdf f (x). For k = 1, 2, 3, . . ., the kth population moment about 0, or the kth moment of the distribution f (x), is E(X^k). The kth sample moment about 0 is (1/n) ∑_{i=1}^n X_i^k.

Definition 4
Let X1 , X2 , · · · , Xn be a random sample from a distribution with pmf
or pdf f (x; θ1 , · · · , θm ), where θ1 , · · · , θm are parameters whose values
are unknown. Then the moment estimators are obtained by equating
the first m sample moments to the corresponding first m population
moments and solving for θ1 , · · · , θm .

Example 1
Let X1 , X2 , · · · , Xn represent a random sample of service times of n
customers at a certain facility, where the underlying distribution is
assumed exponential with parameter λ. Find the point estimator of λ.
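Since E(X) = 1/λ for the exponential distribution, equating the first sample moment to the first population moment gives the moment estimator λ̂ = 1/X̄. A minimal computational sketch (the service times below are illustrative, not from the text):

```python
# Method-of-moments estimate of an exponential rate parameter:
# E[X] = 1/lambda, so setting xbar = 1/lambda gives lambda_hat = 1/xbar.
from statistics import mean

def mom_exponential_rate(sample):
    """Moment estimator lambda_hat = 1 / (sample mean)."""
    return 1.0 / mean(sample)

# Hypothetical service times (minutes); not from the text.
times = [2.3, 0.7, 1.1, 4.0, 1.9, 0.5, 2.8, 1.7]
lam_hat = mom_exponential_rate(times)  # mean is 1.875, so lam_hat = 8/15
```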

Definition 5
Let f (x1 , . . . , xn ; θ), θ ∈ Θ ⊆ R^m, be the joint pmf or joint pdf of n random variables X1 , . . . , Xn with sample values x1 , . . . , xn . The likelihood function of the sample is given by

L(θ; x1 , . . . , xn ) = f (x1 , x2 , . . . , xn ; θ)   [L(θ) is a briefer notation].

If X1 , . . . , Xn are discrete iid random variables with pmf p(x, θ), then the likelihood function is given by

L(θ) = ∏_{i=1}^n p(x_i , θ).

If X1 , . . . , Xn are continuous iid random variables with pdf f (x, θ), then the likelihood function is given by

L(θ) = ∏_{i=1}^n f (x_i , θ).


Definition 6
Maximum likelihood estimators (MLEs) are those values of the parameters that maximize the likelihood function with respect to the parameter θ. That is,

L(θ̂; x1 , . . . , xn ) = max_{θ∈Θ} L(θ; x1 , . . . , xn ),

where Θ is the set of possible values of the parameter θ.

Procedure to find the maximum likelihood estimator
1. Define the likelihood function, L(θ).
2. Often it is easier to work with the natural logarithm ln L(θ).
3. When applicable, differentiate ln L(θ) with respect to θ, and then equate the derivative to zero.
4. Solve for the parameter θ to obtain θ̂.
5. Check whether θ̂ is a local maximizer or a global maximizer.
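The steps above can also be carried out numerically when no closed form is convenient. A sketch under an assumed exponential model with illustrative data, where a crude grid search stands in for the calculus of steps 3-4:

```python
# Numerical illustration of the MLE procedure: maximize ln L(theta) directly.
# Exponential model used as a stand-in; data are illustrative, not from the text.
import math

data = [1.2, 0.4, 2.1, 0.9, 1.6]
n, s = len(data), sum(data)

def log_likelihood(lam):
    # ln L(lambda) = n ln(lambda) - lambda * sum(x_i)
    return n * math.log(lam) - lam * s

# Crude grid search over a plausible range of lambda (step 0.001).
grid = [k / 1000 for k in range(1, 5001)]
lam_mle = max(grid, key=log_likelihood)

# Calculus gives the closed form lambda_hat = n / sum(x_i) = 1 / xbar,
# which the grid search should reproduce to within the grid spacing.
lam_closed = n / s
```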

Example 2
Suppose X1 , X2 , . . . , Xn is a random sample from an exponential
distribution with parameter λ. Find the MLE λ̂ of λ. Is λ̂ unbiased?

Example 3
(a) Let X1 , . . . , Xn be a random sample from a Poisson distribution
with the parameter λ > 0. Find the MLE λ̂ of λ. Is λ̂ unbiased?
(b) Traffic engineers use the Poisson distribution to model light
traffic. This is based on the rationale that when the rate is
approximately constant in light traffic, the distribution of counts
of cars in a given time interval should be Poisson. The following
data show the number of vehicles turning left in 15 randomly
chosen 5-minute intervals at a specific intersection. Calculate the
maximum likelihood estimate.
10 17 12 6 12 11 9 6
10 8 8 16 7 10 6
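Part (a) yields λ̂ = X̄ (the standard Poisson MLE), so part (b) reduces to averaging the 15 observed counts:

```python
# MLE for the Poisson rate is the sample mean (result of part (a)).
from statistics import mean

counts = [10, 17, 12, 6, 12, 11, 9, 6, 10, 8, 8, 16, 7, 10, 6]
lam_mle = mean(counts)  # 148/15, about 9.87 left turns per 5-minute interval
```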

Proposition 1 (The Invariance Principle)
Let θ̂1 , θ̂2 , . . . , θ̂m be the MLEs of the parameters θ1 , θ2 , . . . , θm . Then the MLE of any one-to-one function h(θ1 , θ2 , . . . , θm ) of these parameters is the function h(θ̂1 , θ̂2 , . . . , θ̂m ) of the MLEs.

Example 4
Let X1 , . . . , Xn be a random sample drawn from a normal distribution
N(µ, σ 2 ).
(a) Find the MLE’s of µ and σ 2 .
(b) Find the MLE of σ.


Proposition 2
Under very general conditions on the joint distribution of the sample,
when the sample size n is large, the maximum likelihood estimator of
any parameter θ is approximately unbiased [E (θ̂) ≈ θ] and has
variance that is either as small as or nearly as small as can be achieved
by any estimator. Stated another way, the MLE θ̂ is approximately the
MVUE of θ.

Remark
Sometimes calculus cannot be used to obtain MLE’s.

Example 5
Suppose my waiting time for a bus is uniformly distributed on [0, θ],
with unknown θ > 0, and the results x1 , · · · , xn of a random sample
from this distribution have been observed. Find the MLE of θ.
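Here L(θ) = θ^(−n) for θ ≥ max(xᵢ) and 0 below it, which is strictly decreasing in θ, so the MLE is the sample maximum rather than a zero of a derivative. A one-line sketch with illustrative data (not from the text):

```python
# For Uniform[0, theta], L(theta) = theta**(-n) when theta >= max(x) and 0
# otherwise, so L is maximized at the smallest feasible value: theta_hat = max(x).
waits = [3.7, 8.1, 0.9, 6.4, 2.2]  # hypothetical waiting times (minutes)
theta_mle = max(waits)
```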


Definition 7
Let X be an observation from a population with probability density function f (x; θ). Suppose f (x; θ) is continuous, twice differentiable, and its support does not depend on θ. Then the Fisher information, I(θ), in a single observation X about θ is given by

I(θ) = E[(∂ ln f (X; θ)/∂θ)²].

Lemma 8
The Fisher information contained in a single observation about the unknown parameter θ can be given alternatively as

I(θ) = −E[∂² ln f (X; θ)/∂θ²].


Example 6
Let X be a single observation taken from a normal population with
unknown mean µ and known variance σ 2 . Find the Fisher information
in a single observation X about µ.

Example 7
Let X1 , X2 , ..., Xn be a random sample from a normal population with
unknown mean µ and known variance σ 2 . Find the Fisher information
in this sample of size n about µ.

Remark 2
If X1 , X2 , ..., Xn is a random sample from a population X ∼ f (x; θ),
then the Fisher information, In (θ), in a sample of size n about the
parameter θ is equal to n times the Fisher information in X about θ.
Thus
In (θ) = nI (θ).
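A Monte Carlo check of Examples 6-7 and Remark 2 for the normal case, where I(µ) = 1/σ² and In(µ) = n/σ²; the parameter values and sample sizes below are arbitrary:

```python
# For N(mu, sigma^2) with sigma known, the score is (x - mu)/sigma^2, and
# I(mu) = E[((X - mu)/sigma^2)^2] = 1/sigma^2; then I_n(mu) = n/sigma^2.
import random

random.seed(42)
mu, sigma = 1.0, 2.0
draws = [random.gauss(mu, sigma) for _ in range(100_000)]
score_sq = [((x - mu) / sigma**2) ** 2 for x in draws]
fisher_one = sum(score_sq) / len(score_sq)  # near 1/sigma^2 = 0.25
fisher_n = 50 * fisher_one                  # information in a sample of n = 50
```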
Criteria For Evaluating The Goodness of Estimators

Definition 9
An estimator θ̂ of θ is said to be an unbiased estimator of θ if and
only if
E (θ̂) = θ.
If θ̂ is not unbiased, then it is called a biased estimator of θ.

Example 8
Let X1 , X2 , ..., Xn be a random sample from a normal population with
mean µ and variance σ 2 > 0. Is the sample mean X̄ an unbiased
estimator of the parameter µ ?

Example 9
Let X1 , X2 , ..., Xn be a random sample from a normal population with
mean µ and variance σ 2 > 0. What is the maximum likelihood
estimator of σ 2 ? Is this maximum likelihood estimator an unbiased
estimator of the parameter σ 2 ?

Proposition 3
If X1 , X2 , . . . , Xn is a random sample from a distribution with mean µ, then the sample average µ̂ = X̄ = (1/n) ∑_{i=1}^n X_i is an unbiased estimator of µ.

Proposition 4
Let X ∼ Bin(n, p), where n is known and p ∈ (0, 1) is the parameter. Then the sample proportion p̂ = X/n is an unbiased estimator of p.

Proposition 5
Let X1 , X2 , · · · , Xn be a random sample from a distribution with mean µ and variance σ². Then the sample variance

σ̂² = S² = ∑_{i=1}^n (X_i − X̄)² / (n − 1)

is unbiased for estimating σ².
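Proposition 5 is the reason Python's statistics.variance divides by n − 1. A small simulation with arbitrary parameters shows S² averaging out to σ², while the divisor-n version comes in biased low:

```python
# Compare the n-1 divisor (unbiased S^2) with the n divisor on repeated samples
# from a population with sigma^2 = 4; parameters here are arbitrary.
import random
from statistics import variance, pvariance

random.seed(7)
n, reps = 5, 20_000
s2_vals, biased_vals = [], []
for _ in range(reps):
    sample = [random.gauss(0.0, 2.0) for _ in range(n)]
    s2_vals.append(variance(sample))       # divisor n-1 (unbiased)
    biased_vals.append(pvariance(sample))  # divisor n (biased low)

mean_s2 = sum(s2_vals) / reps          # close to sigma^2 = 4.0
mean_biased = sum(biased_vals) / reps  # close to (n-1)/n * 4.0 = 3.2
```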



Example 10
The accompanying data are observations on flexural strength (MPa) for concrete beams:
5.9 7.2 7.3 6.3 8.1 6.8 7.0 7.6 6.8 6.5
7.0 6.3 7.9 9.0 8.2 8.7 7.8 9.7 7.4 7.7
9.7 7.8 7.7 11.6 11.3 11.8 10.7

(a) Calculate a point estimate of the mean value of strength for the conceptual population of all beams manufactured in this fashion, and state which estimator you used. [Hint: ∑ x_i = 219.8.]
(b) Calculate a point estimate of the strength value that separates the weakest 50% of all such beams from the strongest 50%, and state which estimator you used.
(c) Calculate and interpret a point estimate of the population standard deviation σ. Which estimator did you use? [Hint: ∑ x_i² = 1860.94.]


(d) Calculate a point estimate of the proportion of all such beams whose flexural strength exceeds 10 MPa.
(e) Calculate a point estimate of the population coefficient of variation σ/µ, and state which estimator you used.
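The estimates for parts (a)-(e) can be computed directly from the 27 observations, using the sample mean, sample median, sample standard deviation, sample proportion, and the plug-in ratio s/x̄:

```python
# Worked point estimates for Example 10 from the 27 flexural-strength values.
from statistics import mean, median, stdev

x = [5.9, 7.2, 7.3, 6.3, 8.1, 6.8, 7.0, 7.6, 6.8, 6.5,
     7.0, 6.3, 7.9, 9.0, 8.2, 8.7, 7.8, 9.7, 7.4, 7.7,
     9.7, 7.8, 7.7, 11.6, 11.3, 11.8, 10.7]

xbar = mean(x)                         # (a) sample mean, about 8.14 MPa
med = median(x)                        # (b) sample median, 7.7 MPa
s = stdev(x)                           # (c) sample std dev (n-1 divisor), ~1.66
p10 = sum(v > 10 for v in x) / len(x)  # (d) proportion exceeding 10 MPa, 4/27
cv = s / xbar                          # (e) estimated coefficient of variation
```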


Definition 10
Let θ̂1 and θ̂2 be two unbiased estimators of θ. The estimator θ̂1 is
said to be more efficient than θ̂2 if

V (θ̂1 ) < V (θ̂2 ).

The ratio η given by

η(θ̂1 , θ̂2 ) = V(θ̂2 ) / V(θ̂1 )


Example 11
Let X1 , X2 , ..., Xn be a random sample of size n from Exp(θ). Are the estimators X1 and X̄ unbiased? Between X1 and X̄, which is the more efficient estimator of θ?
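Assuming Exp(θ) here denotes the exponential distribution with mean θ (so that both X₁ and X̄ have expectation θ), the comparison comes down to Var(X₁) = θ² versus Var(X̄) = θ²/n, giving relative efficiency n. A simulation sketch:

```python
# Simulation for Example 11, assuming Exp(theta) is parameterized by its mean:
# both X1 and Xbar are unbiased for theta, but Var(Xbar) = theta^2 / n.
import random
from statistics import mean, variance

random.seed(1)
theta, n, reps = 2.0, 10, 20_000
x1_vals, xbar_vals = [], []
for _ in range(reps):
    sample = [random.expovariate(1.0 / theta) for _ in range(n)]
    x1_vals.append(sample[0])
    xbar_vals.append(mean(sample))

m1, m2 = mean(x1_vals), mean(xbar_vals)        # both center on theta = 2
eta = variance(x1_vals) / variance(xbar_vals)  # relative efficiency, near n = 10
```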


Definition 11
An unbiased estimator θ̂ of θ is said to be a uniform minimum variance unbiased estimator of θ if and only if

V(θ̂) ≤ V(T̂)

for every unbiased estimator T̂ of θ.

Equivalently, an unbiased estimator θ̂ of θ is a uniform minimum variance unbiased estimator of θ if it minimizes the variance E[(θ̂ − θ)²] among all unbiased estimators.

Example 12
Let θ̂1 and θ̂2 be unbiased estimators of θ. Suppose that V(θ̂1 ) = 1, V(θ̂2 ) = 2 and Cov(θ̂1 , θ̂2 ) = 1/2. What are the values of c1 and c2 for which c1 θ̂1 + c2 θ̂2 is an unbiased estimator of θ with minimum variance among unbiased estimators of this type?
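Unbiasedness forces c₁ + c₂ = 1, so the quantity to minimize is c₁²V(θ̂₁) + c₂²V(θ̂₂) + 2c₁c₂ Cov(θ̂₁, θ̂₂) = 2c₁² − 3c₁ + 2. A quick numerical check of the calculus answer c₁ = 3/4, c₂ = 1/4 (minimum variance 7/8):

```python
# Variance of c1*t1 + (1-c1)*t2 with V1 = 1, V2 = 2, Cov = 1/2.
def combo_var(c1, v1=1.0, v2=2.0, cov=0.5):
    c2 = 1.0 - c1
    return c1**2 * v1 + c2**2 * v2 + 2 * c1 * c2 * cov

# Grid search over c1 in [0, 1]; the function is convex, so the grid
# minimizer lands on the calculus solution c1 = 3/4.
grid = [k / 1000 for k in range(1001)]
best_c1 = min(grid, key=combo_var)
best_var = combo_var(best_c1)  # minimum variance 7/8
```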

Theorem 1 (Cramér-Rao lower bound, or Fisher information inequality)
Let X1 , X2 , . . . , Xn be a random sample of size n from a population X with probability density f (x; θ), where θ is a scalar parameter. Let θ̂ be any unbiased estimator of θ. Suppose the likelihood function L(θ) is a differentiable function of θ and satisfies

(d/dθ) ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} h(x1 , . . . , xn ) L(θ) dx1 · · · dxn
= ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} h(x1 , . . . , xn ) (d/dθ) L(θ) dx1 · · · dxn     (1)

for any h(x1 , . . . , xn ) with E(h(X1 , . . . , Xn )) < ∞. Then

Var(θ̂) ≥ 1 / E[(∂ ln L(θ)/∂θ)²] = 1 / (nI(θ)).     (CR1)


Remark 3
If L(θ) is twice differentiable with respect to θ, the inequality (CR1) can be stated equivalently as

Var(θ̂) ≥ −1 / E[∂² ln L(θ)/∂θ²] = 1 / In (θ).     (CR2)

The inequalities (CR1) and (CR2) are known as the Cramér-Rao lower bound for the variance of θ̂, or the Fisher information inequality. The condition (1) interchanges the order of integration and differentiation. Therefore any distribution whose range depends on the value of the parameter is not covered by this theorem. Hence distributions like the uniform distribution cannot be analyzed using the Cramér-Rao lower bound.


Definition 12
An unbiased estimator θ̂ is called an efficient estimator if it attains the Cramér-Rao lower bound, that is,

Var(θ̂) = 1 / E[(∂ ln L(θ)/∂θ)²] = 1 / (nI(θ)).

Example 13
Let X1 , X2 , . . . , Xn be a random sample of size n from a distribution with density function

f (x; θ) = 3θx² e^{−θx³} if 0 < x < ∞, and 0 otherwise.

What is the Cramér-Rao lower bound for the variance of an unbiased estimator of the parameter θ?
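A sketch of the computation, using Lemma 8 and the form (CR2) of the bound:

```latex
\ln f(x;\theta) = \ln 3 + \ln\theta + 2\ln x - \theta x^{3}
\;\Rightarrow\;
\frac{\partial \ln f}{\partial \theta} = \frac{1}{\theta} - x^{3},
\qquad
\frac{\partial^{2} \ln f}{\partial \theta^{2}} = -\frac{1}{\theta^{2}},
\qquad
I(\theta) = -E\!\left[-\frac{1}{\theta^{2}}\right] = \frac{1}{\theta^{2}}.
```

Hence the Cramér-Rao lower bound is Var(θ̂) ≥ 1/(nI(θ)) = θ²/n for any unbiased estimator θ̂ of θ.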