
Multiple Regression Analysis

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k + u$

3. Asymptotic Properties

Consistency
Under the Gauss-Markov assumptions OLS is BLUE, but in other cases it won't always be possible to find unbiased estimators
In those cases, we may settle for estimators that are consistent, meaning as n → ∞, the distribution of the estimator collapses to the parameter value

Sampling Distributions as n → ∞

[Figure: three sampling distributions of $\hat{\beta}_1$ for sample sizes n1 < n2 < n3, each centered near $\beta_1$; the distribution tightens around $\beta_1$ as n grows.]
Consistency of OLS
Under the Gauss-Markov assumptions, the OLS estimator is consistent (and unbiased)
Consistency can be proved for the simple regression case in a manner similar to the proof of unbiasedness
Will need to take the probability limit (plim) to establish consistency

Proving Consistency

$\hat{\beta}_1 = \dfrac{\sum (x_{i1} - \bar{x}_1)\, y_i}{\sum (x_{i1} - \bar{x}_1)^2} = \beta_1 + \dfrac{n^{-1} \sum (x_{i1} - \bar{x}_1)\, u_i}{n^{-1} \sum (x_{i1} - \bar{x}_1)^2}$

$\text{plim}\,\hat{\beta}_1 = \beta_1 + \dfrac{\text{Cov}(x_1, u)}{\text{Var}(x_1)} = \beta_1$, because $\text{Cov}(x_1, u) = 0$
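
To make this concrete, here is a minimal simulation sketch of consistency (the parameter values and data-generating process are illustrative assumptions, not from the slides): the OLS slope collapses on the true $\beta_1$ as n grows.

```python
# Minimal sketch: OLS slope converging to the true beta_1 as n grows.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 0.5  # illustrative true parameters (assumed)

for n in [100, 10_000, 1_000_000]:
    x = rng.normal(size=n)
    u = rng.normal(size=n)          # Cov(x, u) = 0, so OLS is consistent
    y = beta0 + beta1 * x + u
    # OLS slope: sum((x_i - xbar) * y_i) / sum((x_i - xbar)^2)
    b1_hat = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)
    print(f"n = {n:>9}: beta1_hat = {b1_hat:.4f}")
# The printed estimates tighten around 0.5 as n increases.
```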

A Weaker Assumption
For unbiasedness, we assumed a zero conditional mean – E(u|x1, x2, …, xk) = 0
For consistency, we can have the weaker assumption of zero mean and zero correlation – E(u) = 0 and Cov(xj, u) = 0, for j = 1, 2, …, k
Without this assumption, OLS will be biased and inconsistent!

Deriving the Inconsistency
Just as we could derive the omitted variable bias earlier, now we want to think about the inconsistency, or asymptotic bias, in this case

True model: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + v$
You think: $y = \beta_0 + \beta_1 x_1 + u$, so that $u = \beta_2 x_2 + v$ and $\text{plim}\,\tilde{\beta}_1 = \beta_1 + \beta_2 \delta$, where $\delta = \text{Cov}(x_1, x_2)\,/\,\text{Var}(x_1)$
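
A hypothetical simulation of this inconsistency (the DGP below is an assumption for illustration): the short regression's slope settles on $\beta_1 + \beta_2 \delta$ rather than $\beta_1$, no matter how large n gets.

```python
# Sketch of omitted-variable inconsistency: the short regression of y on x1
# alone converges to beta1 + beta2*delta, not to beta1.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000                            # large n: inconsistency does not go away
beta0, beta1, beta2 = 1.0, 0.5, 2.0    # illustrative true parameters (assumed)

x1 = rng.normal(size=n)
x2 = 0.3 * x1 + rng.normal(size=n)     # correlated regressors: Cov(x1, x2) != 0
v = rng.normal(size=n)
y = beta0 + beta1 * x1 + beta2 * x2 + v

# Misspecified short regression: y on x1 only
b1_tilde = np.sum((x1 - x1.mean()) * y) / np.sum((x1 - x1.mean()) ** 2)

delta = 0.3  # population Cov(x1, x2)/Var(x1) = 0.3 by construction here
print(f"beta1_tilde         = {b1_tilde:.4f}")
print(f"beta1 + beta2*delta = {beta1 + beta2 * delta:.4f}")  # both approx 1.1
```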
Asymptotic Bias (cont)
So, thinking about the direction of the asymptotic bias is just like thinking about the direction of bias for an omitted variable
Main difference is that asymptotic bias uses the population variance and covariance, while bias uses the sample counterparts
Remember, inconsistency is a large sample problem – it doesn't go away as one adds data
Large Sample Inference
Recall that under the CLM assumptions, the sampling distributions are normal, so we could derive t and F distributions for testing
This exact normality was due to assuming the population error distribution was normal
This assumption of normal errors implied that the distribution of y, given the x's, was normal as well

Large Sample Inference (cont)
Easy to come up with examples for which this exact normality assumption will fail
Any clearly skewed variable, like wages, arrests, savings, etc. can't be normal, since a normal distribution is symmetric
Normality assumption not needed to conclude OLS is BLUE, only for inference

Central Limit Theorem
Based on the central limit theorem, we can show that OLS estimators are asymptotically normal
Asymptotic normality implies that P(Z < z) → Φ(z) as n → ∞, or P(Z < z) ≈ Φ(z)
The central limit theorem states that the standardized average of any population with mean µ and variance σ² is asymptotically ~N(0,1), or

$Z = \dfrac{\bar{Y} - \mu_Y}{\sigma/\sqrt{n}} \overset{a}{\sim} N(0,1)$
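
A quick sketch of this claim, using an assumed exponential population (clearly skewed, with µ = 1 and σ² = 1): the standardized average behaves like a standard normal.

```python
# CLT demo: standardized averages of a skewed population look N(0,1).
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1_000, 10_000
draws = rng.exponential(scale=1.0, size=(reps, n))   # mu = 1, sigma^2 = 1
z = (draws.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))  # Z = (Ybar - mu)/(sigma/sqrt(n))

# Compare P(Z < z) with Phi(z) at a few points; they should be close.
for cutoff in [-1.96, 0.0, 1.96]:
    print(f"P(Z < {cutoff:5.2f}) = {np.mean(z < cutoff):.3f}")
# Expected roughly 0.025, 0.500, 0.975
```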
Asymptotic Normality
Under the Gauss-Markov assumptions,

(i) $\sqrt{n}\left(\hat{\beta}_j - \beta_j\right) \overset{a}{\sim} \text{Normal}\!\left(0,\ \sigma^2/a_j^2\right)$, where $a_j^2 = \text{plim}\!\left(n^{-1} \sum \hat{r}_{ij}^2\right)$

(ii) $\hat{\sigma}^2$ is a consistent estimator of $\sigma^2$

(iii) $\left(\hat{\beta}_j - \beta_j\right)/\text{se}\!\left(\hat{\beta}_j\right) \overset{a}{\sim} \text{Normal}(0,1)$
Asymptotic Normality (cont)
Because the t distribution approaches the normal distribution for large df, we can also say that

$\left(\hat{\beta}_j - \beta_j\right)/\text{se}\!\left(\hat{\beta}_j\right) \overset{a}{\sim} t_{n-k-1}$

Note that while we no longer need to assume normality with a large sample, we do still need homoskedasticity
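
A hedged simulation sketch of this point (the skewed chi-squared errors and sample size are illustrative assumptions): even with decidedly non-normal but homoskedastic errors, the t statistic rejects at roughly the nominal 5% rate in large samples.

```python
# t statistics from a regression with skewed errors are approximately N(0,1).
import numpy as np

rng = np.random.default_rng(3)
n, reps, beta1 = 2_000, 5_000, 0.5
tstats = np.empty(reps)

for r in range(reps):
    x = rng.normal(size=n)
    u = rng.chisquare(df=2, size=n) - 2.0   # skewed, mean-zero, homoskedastic errors
    y = beta1 * x + u
    xd = x - x.mean()
    b1 = np.sum(xd * (y - y.mean())) / np.sum(xd ** 2)
    resid = (y - y.mean()) - b1 * xd        # residuals from intercept + slope fit
    sigma2 = np.sum(resid ** 2) / (n - 2)   # sigma^2-hat
    se = np.sqrt(sigma2 / np.sum(xd ** 2))
    tstats[r] = (b1 - beta1) / se

print(f"share with |t| > 1.96: {np.mean(np.abs(tstats) > 1.96):.3f}")  # ~0.05
```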
Asymptotic Standard Errors
If u is not normally distributed, we sometimes will refer to the standard error as an asymptotic standard error, since

$\text{se}\!\left(\hat{\beta}_j\right) = \sqrt{\dfrac{\hat{\sigma}^2}{SST_j\left(1 - R_j^2\right)}}, \qquad \text{se}\!\left(\hat{\beta}_j\right) \approx c_j/\sqrt{n}$

So, we can expect standard errors to shrink at a rate proportional to the inverse of √n
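
A minimal numerical check of this √n rate, under an assumed simple DGP: multiplying each estimated standard error by √n should give roughly the same constant $c_j$ across sample sizes.

```python
# se(b1) shrinks roughly like 1/sqrt(n), so se * sqrt(n) stays near a constant.
import numpy as np

rng = np.random.default_rng(4)
for n in [500, 5_000, 50_000]:
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(size=n)   # assumed DGP for illustration
    xd = x - x.mean()
    b1 = np.sum(xd * (y - y.mean())) / np.sum(xd ** 2)
    resid = (y - y.mean()) - b1 * xd
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(xd ** 2))
    print(f"n = {n:>6}: se = {se:.5f}, se * sqrt(n) = {se * np.sqrt(n):.3f}")
# se falls by about sqrt(10) per row; se * sqrt(n) hovers near the same value.
```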
Lagrange Multiplier statistic
Once we are using large samples and relying on asymptotic normality for inference, we can use more than t and F stats
The Lagrange multiplier or LM statistic is an alternative for testing multiple exclusion restrictions
Because the LM statistic uses an auxiliary regression it's sometimes called an nR² stat

LM Statistic (cont)
Suppose we have a standard model, $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k + u$, and our null hypothesis is H0: $\beta_{k-q+1} = 0, \ldots, \beta_k = 0$
First, we just run the restricted model

$y = \tilde{\beta}_0 + \tilde{\beta}_1 x_1 + \ldots + \tilde{\beta}_{k-q} x_{k-q} + \tilde{u}$

Now take the residuals, $\tilde{u}$, and regress $\tilde{u}$ on $x_1, x_2, \ldots, x_k$ (i.e. all the variables)

$LM = nR_u^2$, where $R_u^2$ is from this regression
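
A sketch of this LM procedure in code (the variable layout and data-generating process are assumptions for illustration, with q = 2 exclusion restrictions):

```python
# LM test for H0: beta_{k-q+1} = ... = beta_k = 0, via the auxiliary regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, q = 1_000, 2
x1, x2, x3 = rng.normal(size=(3, n))       # x2, x3 are the q excluded regressors
y = 1.0 + 0.5 * x1 + rng.normal(size=n)    # H0 true: x2, x3 have zero coefficients

# Step 1: run the restricted model (y on x1 only) and keep residuals u_tilde
Xr = np.column_stack([np.ones(n), x1])
u_tilde = y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]

# Step 2: regress u_tilde on ALL regressors and get R^2_u
Xf = np.column_stack([np.ones(n), x1, x2, x3])
fitted = Xf @ np.linalg.lstsq(Xf, u_tilde, rcond=None)[0]
r2_u = 1.0 - np.sum((u_tilde - fitted) ** 2) / np.sum((u_tilde - u_tilde.mean()) ** 2)

lm = n * r2_u
print(f"LM = {lm:.3f}, chi2({q}) 5% critical value = {stats.chi2.ppf(0.95, q):.3f}")
```

With H0 true as constructed here, LM should fall below the $\chi_2^2$ critical value about 95% of the time.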


LM Statistic (cont)
$LM \overset{a}{\sim} \chi_q^2$, so we can choose a critical value, c, from a $\chi_q^2$ distribution, or just calculate a p-value for $\chi_q^2$

With a large sample, the result from an F test and from an LM test should be similar
Unlike the F test and t test for one exclusion, the LM test and F test will not be identical

Asymptotic Efficiency
Estimators besides OLS can also be consistent
However, under the Gauss-Markov assumptions, the OLS estimators will have the smallest asymptotic variances
We say that OLS is asymptotically efficient
Important to remember our assumptions, though: if the errors are not homoskedastic, this result no longer holds
