ECONOMIC ANALYSIS OF BUSINESS
AUTOCORRELATION
1. What is Autocorrelation
2. What Causes Autocorrelation
3. Consequences of Autocorrelation
4. Detecting and Resolving Autocorrelation
What is Autocorrelation
One of the important assumptions of the regression model is that the
covariances and correlations between
different disturbances are all zero:
cov(ut, us)=0 for all t≠s
This assumption states that the disturbances
ut and us are independently distributed,
which is called serial independence.
If this assumption is no longer valid, then the
disturbances are not pairwise independent, but
pairwise autocorrelated (or serially correlated).
This means that an error occurring at period t
may be carried over to the next period t+1.
Autocorrelation is most likely to occur in time
series data.
What Causes Autocorrelation
One factor that can cause autocorrelation is omitted variables.
Suppose Yt is related to X2t and X3t, but we wrongly omit
X3t from our model.
The effect of X3t will then be captured by the disturbances ut.
If X3t, like many economic series, exhibits a trend over time,
then X3t depends on X3t-1, X3t-2, and so on.
Similarly, ut then depends on ut-1, ut-2, and so on.
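This mechanism can be sketched numerically. The minimal numpy sketch below (all coefficients, the trend slope in X3t, and the sample size are illustrative, not from the text) generates data from a model with a trending X3t, omits X3t from the OLS fit, and shows that the residuals come out strongly autocorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
time = np.arange(T)

# Illustrative data: X3 trends over time, so omitting it leaves a
# trending (hence serially correlated) component in the residuals.
X2 = rng.normal(size=T)
X3 = 0.05 * time + rng.normal(scale=0.2, size=T)   # trending regressor
Y = 1.0 + 2.0 * X2 + 3.0 * X3 + rng.normal(size=T)

# Fit the misspecified model Y = b1 + b2*X2 (X3 omitted) by OLS.
A = np.column_stack([np.ones(T), X2])
b, *_ = np.linalg.lstsq(A, Y, rcond=None)
resid = Y - A @ b

# Sample lag-1 autocorrelation of the residuals.
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(rho)  # strongly positive: the trend of the omitted X3 is in ut
```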
Another possible reason is misspecification.
Suppose Yt is related to X2t through a quadratic
relationship:
Yt=β1+β2X2t²+ut
but we wrongly assume and estimate a straight line:
Yt=β1+β2X2t+ut
Then the error term obtained from the straight line will
depend on X2t².
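This case can also be checked by simulation. In the numpy sketch below (the coefficients, noise level, and range of X2t are illustrative), data are generated from the quadratic model but fitted with a straight line; the ordered residuals inherit the omitted X2t² term and are serially correlated:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Illustrative data from the quadratic model Yt = b1 + b2*X2t^2 + ut.
X2 = np.linspace(-3, 3, T)
Y = 1.0 + 0.5 * X2**2 + rng.normal(scale=0.3, size=T)

# Wrong specification: fit the straight line Yt = b1 + b2*X2t + ut.
A = np.column_stack([np.ones(T), X2])
b, *_ = np.linalg.lstsq(A, Y, rcond=None)
resid = Y - A @ b

# The residuals carry the smooth X2^2 pattern, so consecutive
# residuals (ordered by X2) are strongly correlated.
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(rho)  # strongly positive
```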
First-Order Autocorrelation
The simplest and most commonly observed form is
first-order autocorrelation.
Consider the multiple regression model:
Yt=β1+β2X2t+β3X3t+β4X4t+…+βkXkt+ut
in which the current observation of the error term ut is
a function of the previous (lagged) observation of the
error term:
ut=ρut-1+et
The coefficient ρ is called the first-order autocorrelation
coefficient and takes values from -1 to +1.
The size of ρ determines the strength
of serial correlation.
We can have three different cases.
(a) If ρ is zero, then we have no autocorrelation.
(b) If ρ approaches unity, the previous observation of
the error becomes more important in determining the
value of the current error, and a high degree of
autocorrelation exists. In this case we have positive
autocorrelation.
(c) If ρ approaches -1, we have a high degree of
negative autocorrelation.
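These three cases can be verified by simulation. A minimal numpy sketch (the sample size and the particular ρ values are illustrative) that generates ut=ρut-1+et and reports the sample lag-1 autocorrelation:

```python
import numpy as np

def ar1_errors(rho, T=5000, seed=0):
    """Simulate u_t = rho*u_{t-1} + e_t and return the sample
    lag-1 autocorrelation of the series."""
    rng = np.random.default_rng(seed)
    e = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + e[t]
    return np.corrcoef(u[:-1], u[1:])[0, 1]

# (a) rho = 0: no autocorrelation; (b) rho near 1: strong positive;
# (c) rho near -1: strong negative.
for rho in (0.0, 0.9, -0.9):
    print(rho, round(ar1_errors(rho), 2))
```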
Higher-Order Autocorrelation
Second-order when:
ut=ρ1ut-1+ρ2ut-2+et
Third-order when:
ut=ρ1ut-1+ρ2ut-2+ρ3ut-3+et
p-th order when:
ut=ρ1ut-1+ρ2ut-2+ρ3ut-3+…+ρput-p+et
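Generating a p-th order process follows the same recursion as the first-order case, with p lagged terms. A small numpy sketch (the AR(2) coefficients ρ1=0.5, ρ2=0.3 are illustrative):

```python
import numpy as np

def ar_p_errors(rhos, T, seed=0):
    """Simulate u_t = rho_1*u_{t-1} + ... + rho_p*u_{t-p} + e_t."""
    rng = np.random.default_rng(seed)
    p = len(rhos)
    e = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(T):
        # Only lags that exist so far contribute (u is zero before t=0).
        u[t] = e[t] + sum(rhos[i] * u[t - 1 - i] for i in range(min(p, t)))
    return u

# Second-order example: u_t = 0.5*u_{t-1} + 0.3*u_{t-2} + e_t.
u = ar_p_errors([0.5, 0.3], 2000)
print(np.corrcoef(u[:-1], u[1:])[0, 1])  # clearly positive lag-1 autocorrelation
```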
Consequences of Autocorrelation
Although the OLS coefficient estimates remain unbiased, the
estimated variances of the regression coefficients will be
biased and inconsistent, and therefore hypothesis testing is
no longer valid. In most cases, the R2 will be overestimated
and the t-statistics will tend to be inflated.
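The inflated t-statistics can be demonstrated by Monte Carlo. In the sketch below (numpy only; the sample size, ρ=0.9, and replication count are illustrative), the true slope is zero, yet the nominal 5% t-test rejects well above 5% of the time because both series are strongly autocorrelated:

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(T, rho):
    """Simulate an AR(1) series u_t = rho*u_{t-1} + e_t."""
    e = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + e[t]
    return u

T, reps, rejections = 100, 500, 0
for _ in range(reps):
    x = ar1(T, 0.9)            # regressor: AR(1)
    y = ar1(T, 0.9)            # independent AR(1): true slope is zero
    A = np.column_stack([np.ones(T), x])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ b
    s2 = resid @ resid / (T - 2)              # conventional OLS error variance
    se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[1, 1])
    if abs(b[1] / se) > 1.96:                 # nominal 5% two-sided t-test
        rejections += 1

rate = rejections / reps
print(rate)  # well above the nominal 0.05
```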
Detecting Autocorrelation
The Durbin-Watson (DW) Test
The DW statistic is approximately d ≈ 2(1−ρ̂), where ρ̂ is the
estimated first-order autocorrelation coefficient of the residuals.
• The Durbin-Watson statistic always lies between 0 and 4. A value of 2.0
means that no autocorrelation is detected in the sample. Values below 2
indicate positive autocorrelation and values above 2 indicate negative
autocorrelation.
• A stock price displaying positive autocorrelation would indicate that
yesterday's price has a positive influence on today's price, so if the stock
fell yesterday, it is also likely to fall today. A security that has negative
autocorrelation, on the other hand, has a negative influence on itself over
time, so if it fell yesterday, there is a greater likelihood it will rise today.