Structural VAR
Part 11
Zahid Asghar, Professor, School of Economics, QAU
03-12-2023
:
Introduction
Del Negro and Schorfheide (2011):
At first glance, VARs appear to be straightforward multivariate generalizations of
univariate autoregressive models. At second sight, they turn out to be one of the
key empirical tools in modern macroeconomics.
:
What are VARs?
Multivariate linear time-series models
Each endogenous variable in the system is a function of lagged values of all endogenous variables
A simple and flexible alternative to the traditional multiple-equation models
:
Historical Overview: Sims’ Critique
In the 1980s, Sims criticized the large-scale macro-econometric models of the time
Proposed VARs as an alternative that allowed one to model macroeconomic data informatively
:
What Are VARs Used For?
FORECASTING
Reduced Form VARs
STRUCTURAL ANALYSIS
Structural VARs
:
Unit Plan/Roadmap
VAR models roadmap
:
Estimation of VARs
:
Introduction to VARs
Let yt be a vector with the values of n variables at time t:
yt = [y1,t y2,t … yn,t]′
A p-th order vector autoregressive process generalizes a one-variable AR(p) process to n variables:
yt = G0 + G1 yt−1 + G2 yt−2 + ⋯ + Gp yt−p + et (Reduced Form VAR)
G0 = (n × 1) vector of constants
G1, …, Gp = (n × n) matrices of coefficients
et = (n × 1) vector of white-noise innovations
E[et] = 0
E[et e′τ] = Ω if t = τ, and 0 otherwise
:
Example: A VAR(1) in 2 Variables
y1,t = g11 y1,t−1 + g12 y2,t−1 + e1,t
y2,t = g21 y1,t−1 + g22 y2,t−1 + e2,t
In matrix form: yt = G1 yt−1 + et, where
yt = [y1,t y2,t]′, e.g. yt = [πt gdpt]′
G1 = [g11 g12; g21 g22]
et = [e1,t e2,t]′
Assumptions about the error terms:
E[et e′τ] = [0 0; 0 0] for t ≠ τ
E[et e′t] = [σ²e1 σe1e2; σe1e2 σ²e2] = Ω
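The bivariate VAR(1) above can be simulated in a few lines. This is a minimal numpy sketch; the coefficient and covariance values are made up for illustration and do not come from the slides.

```python
import numpy as np

# Illustrative (made-up) coefficients for a stationary bivariate VAR(1):
# y_t = G1 @ y_{t-1} + e_t, with e_t drawn from N(0, Omega).
rng = np.random.default_rng(0)
G1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])          # [g11 g12; g21 g22]
Omega = np.array([[1.0, 0.3],
                  [0.3, 1.0]])       # contemporaneous error covariance

T = 500
y = np.zeros((T, 2))
e = rng.multivariate_normal(np.zeros(2), Omega, size=T)
for t in range(1, T):
    y[t] = G1 @ y[t - 1] + e[t]      # the VAR(1) recursion
```

Because both eigenvalues of G1 lie inside the unit circle, the simulated series fluctuates around zero instead of exploding.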
:
Estimation: by OLS
Performed with OLS applied equation by equation
Estimates are:
consistent
efficient
equivalent to GLS (since every equation has the same regressors)
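Equation-by-equation OLS can be sketched directly with numpy. The data below are simulated from a made-up VAR(1); stacking a constant and one lag of both variables as regressors and solving the least-squares problem recovers the coefficient matrices.

```python
import numpy as np

# Simulate a bivariate VAR(1) with made-up coefficients, then estimate
# it by OLS applied equation by equation.
rng = np.random.default_rng(1)
G1_true = np.array([[0.5, 0.1], [0.2, 0.4]])
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = G1_true @ y[t - 1] + rng.standard_normal(2)

# Regressors: a constant plus one lag of both variables.
X = np.column_stack([np.ones(T - 1), y[:-1]])   # (T-1) x 3
Y = y[1:]                                       # (T-1) x 2

# Solving for each column of Y separately is equation-by-equation OLS;
# identical regressors in every equation make it equivalent to GLS.
B = np.linalg.lstsq(X, Y, rcond=None)[0]        # 3 x 2: rows [G0'; G1']
G1_hat = B[1:].T
```

With 2000 observations the OLS estimate of G1 is close to the true matrix, illustrating consistency.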
:
General Specifications Choices
Selection of variables to be included: in accordance with economic theory, empirical
evidence and/or experience
Exogenous variables can be included: constant, time trends, other additional
explanators
Non-stationary level data are often transformed (log levels, log differences, growth rates, etc.)
The model should be parsimonious
:
Stationary VARs
:
Stationarity of a VAR: Definition
A p-th order VAR is said to be covariance-stationary if:
1. The expected value of yt does not depend on time t:
E[yt] = E[yt+j] = µ = [µ1 µ2 … µn]′
2. The covariance matrix of yt and yt+j depends only on the lapse of time j and not on the reference period t:
E[(yt − µ)(yt+j − µ)′] = E[(ys − µ)(ys+j − µ)′] = Γj
:
Conditions for Stationarity
The conditions for a VAR to be stationary are similar to the conditions for a univariate AR process to be stationary:
yt = G0 + G1 yt−1 + G2 yt−2 + ⋯ + Gp yt−p + et
(In − G1 L − G2 L2 − ⋯ − Gp Lp)yt = G0 + et
G(L)yt = G0 + et
For yt to be stationary, the matrix polynomial in the lag operator G(L) must be invertible.
:
Conditions for Stationarity
A VAR(p) process is stationary if all the np roots of the characteristic polynomial
det(In − G1 L − G2 L2 − ⋯ − Gp Lp) = 0
lie (in modulus) outside the unit circle.
Software sometimes reports the inverse roots of the characteristic AR polynomial, which should then lie inside the unit circle.
[Figure: unit circle]
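The root condition is usually checked in practice via the companion (stacked) form of the VAR: the process is stationary exactly when all eigenvalues of the companion matrix, which are the inverse roots of the characteristic polynomial, lie strictly inside the unit circle. A numpy sketch with made-up coefficients:

```python
import numpy as np

def is_stationary(G_list):
    """Stationarity check for a VAR(p) via its companion matrix.

    G_list holds the n x n coefficient matrices [G1, ..., Gp]. The VAR is
    stationary iff every eigenvalue of the (np x np) companion matrix has
    modulus strictly below one.
    """
    n = G_list[0].shape[0]
    p = len(G_list)
    top = np.hstack(G_list)                  # first block row [G1 ... Gp]
    if p == 1:
        companion = top
    else:
        bottom = np.eye(n * (p - 1), n * p)  # identity blocks shift lags down
        companion = np.vstack([top, bottom])
    return np.max(np.abs(np.linalg.eigvals(companion))) < 1

# Illustrative VAR(2) coefficients (not from the slides):
G1 = np.array([[0.5, 0.1], [0.2, 0.4]])
G2 = np.array([[0.2, 0.0], [0.0, 0.2]])
print(is_stationary([G1, G2]))
```

A univariate "VAR" with coefficient above one fails the check, matching the familiar AR(1) unit-root condition.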
:
Vector Moving Average Representation of a VAR
If a VAR is stationary, the yt vector can be expressed as a sum of all of the past white-noise shocks et (VMA(∞) representation):
yt = µ + G(L)−1 et, where µ = G(L)−1 G0
yt = µ + (In + Ψ1 L + Ψ2 L2 + …)et
yt = µ + et + Ψ1 et−1 + Ψ2 et−2 + …
yt = µ + ∑∞i=0 Ψi et−i
where Ψi is an (n × n) matrix of coefficients, and Ψ0 is the identity matrix.
From the VMA(∞) representation it is possible to obtain impulse response functions.
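The Ψi matrices can be computed recursively from the VAR coefficients: Ψ0 = In and Ψi = Σ_{j=1}^{min(i,p)} Ψ_{i−j} Gj. A short numpy sketch (coefficients are illustrative, not from the slides):

```python
import numpy as np

def vma_coefficients(G_list, horizon):
    """VMA(inf) coefficient matrices Psi_0..Psi_horizon of a VAR(p).

    Uses the recursion Psi_0 = I, Psi_i = sum_j Psi_{i-j} @ G_j.
    These Psi_i are also the (non-orthogonalized) impulse responses.
    """
    n = G_list[0].shape[0]
    p = len(G_list)
    Psi = [np.eye(n)]
    for i in range(1, horizon + 1):
        Psi_i = sum(Psi[i - j] @ G_list[j - 1]
                    for j in range(1, min(i, p) + 1))
        Psi.append(Psi_i)
    return Psi

G1 = np.array([[0.5, 0.1], [0.2, 0.4]])
Psi = vma_coefficients([G1], horizon=3)
# For a VAR(1), the recursion collapses to Psi_i = G1 ** i.
```

For a VAR(1) the i-step impulse response is simply the i-th power of G1, which gives a quick sanity check on the recursion.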
:
Lag Specification Criteria
:
Lags Needed for the VAR
What number of lags is most appropriate?
If p is too short, the model may be poorly specified
If p is too long, too many degrees of freedom will be lost
The number of lags should be sufficient for the residuals from the estimation to be white noise in each equation
:
The Curse of Dimensionality
VARS ARE VERY DENSELY PARAMETERIZED
In a VAR(p) we have p matrices of dimension n × n: G1, …, Gp
Assume G0 is an intercept vector (dimension: n × 1)
The total number of coefficients/parameters to be estimated is: n + n·n·p = n(1 + np)
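The count n(1 + np) grows quickly. A one-line check, using the n = 7, p = 4 limits quoted in the practitioner's advice later in these slides:

```python
# Parameters in a VAR(p) with an intercept: n + n*n*p = n(1 + np).
def var_param_count(n, p):
    return n * (1 + n * p)

# Seven variables with four lags already require 203 coefficients,
# which is why a T = 100 sample is stretched thin.
print(var_param_count(7, 4))  # 203
```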
:
Overfitting versus Omitted Variable Bias
Over-fitting: poor-quality estimates and bad forecasts
Omitted variable bias: poor-quality estimates and bad forecasts
Possible solutions:
Core VAR plus rotating variables
Bayesian Analysis
:
Lag Selection Criteria
As for univariate models, one can use multidimensional versions of AIC, BIC, HQ, etc.
Information-based criteria: trade-off between goodness of fit (reduction in sum of squares) and parsimony
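Information-criterion lag selection can be sketched with numpy: fit VAR(p) by OLS for each candidate p and compare criteria built from the log determinant of the residual covariance matrix. The data and coefficients below are simulated for illustration; a real application would use `statsmodels`' built-in lag-order selection.

```python
import numpy as np

# Simulate a bivariate VAR(1) with made-up coefficients.
rng = np.random.default_rng(2)
G1 = np.array([[0.5, 0.1], [0.2, 0.4]])
T, n = 400, 2
y = np.zeros((T, n))
for t in range(1, T):
    y[t] = G1 @ y[t - 1] + rng.standard_normal(n)

def info_criteria(y, p):
    """AIC and BIC of a VAR(p) fitted by OLS with a constant."""
    T_eff = len(y) - p
    # Regressors: constant plus lags 1..p of all variables.
    X = np.column_stack([np.ones(T_eff)] +
                        [y[p - j:len(y) - j] for j in range(1, p + 1)])
    Y = y[p:]
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    U = Y - X @ B                         # residuals
    Sigma = U.T @ U / T_eff               # residual covariance matrix
    k = B.size                            # total estimated parameters
    logdet = np.linalg.slogdet(Sigma)[1]
    aic = logdet + 2 * k / T_eff
    bic = logdet + np.log(T_eff) * k / T_eff
    return aic, bic

aics = [info_criteria(y, p)[0] for p in range(1, 5)]
best_p = 1 + int(np.argmin(aics))         # lag order minimizing AIC
```

Both criteria penalize extra lags; BIC's log(T) penalty is harsher, so it tends to pick shorter lag lengths than AIC.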
:
Lag Specification: Practitioner’s Advice
p = 4 when working with quarterly data
p = 12 with monthly data
The effective constraint is np < T/3
Example: T = 100, n ≤ 7, p = 4
:
Forecasting using VARs
:
Forecasting using the Estimated VAR
Let Yt−1 be the information set containing all information available up to time t − 1 (before realizations of et are known)
Then: E[yt | Yt−1] = G0 + G1 yt−1 + G2 yt−2 + ⋯ + Gp yt−p
The forecast error can be decomposed into the sum of et, the unexpected innovation of yt, and the coefficient estimation error:
yt − E[yt | Yt−1] = et + V(Yt−1)
If the estimator of the coefficients is consistent and estimates are based on many observations, the coefficient estimation error tends to be small, and:
yt − E[yt | Yt−1] ≅ et
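A one-step-ahead forecast from an estimated VAR(1) is just the conditional expectation with the OLS estimates plugged in. A numpy sketch with simulated data (coefficients are made up for illustration):

```python
import numpy as np

# Simulate a bivariate VAR(1), estimate it by OLS, then form the
# one-step forecast E[y_T | Y_{T-1}] = G0_hat + G1_hat @ y_{T-1}.
rng = np.random.default_rng(3)
G1_true = np.array([[0.5, 0.1], [0.2, 0.4]])
T = 1000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = G1_true @ y[t - 1] + rng.standard_normal(2)

# OLS on a constant plus one lag of both variables.
X = np.column_stack([np.ones(T - 1), y[:-1]])
B = np.linalg.lstsq(X, y[1:], rcond=None)[0]   # rows: [G0'; G1']
G0_hat, G1_hat = B[0], B[1:].T

# Conditional expectation of the next observation given the last one.
y_forecast = G0_hat + G1_hat @ y[-1]
```

With many observations the estimation error in G0_hat and G1_hat is small, so the forecast error is dominated by the unexpected innovation et, as the slide states.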
:
Iterated Forecasts
Iterating one period forward:
E[yt+1 | Yt−1] = G0 + G1 E[yt | Yt−1] + G2 yt−1 + ⋯ + Gp yt−p+1
Iterating j periods forward:
E[yt+j | Yt−1] = G0 + G1 E[yt+j−1 | Yt−1] + G2 E[yt+j−2 | Yt−1] + ⋯ + Gp E[yt+j−p | Yt−1]
where E[yt−s | Yt−1] = yt−s for s ≥ 1
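For a VAR(1) the iteration is especially clean: each forecast is fed back into the recursion, and for a stationary VAR the path converges to the unconditional mean µ = (I − G1)⁻¹ G0. A numpy sketch with illustrative (not estimated) coefficients:

```python
import numpy as np

# Iterated j-step-ahead forecasts for a VAR(1):
# E[y_{t+j} | Y_t] = G0 + G1 @ E[y_{t+j-1} | Y_t].
G0 = np.array([0.1, 0.2])
G1 = np.array([[0.5, 0.1], [0.2, 0.4]])
y_last = np.array([1.0, 0.5])          # last observed value

def iterated_forecast(G0, G1, y_last, steps):
    forecasts = []
    y_hat = y_last
    for _ in range(steps):
        y_hat = G0 + G1 @ y_hat        # feed the forecast back in
        forecasts.append(y_hat)
    return forecasts

path = iterated_forecast(G0, G1, y_last, steps=20)

# Stationary case: the forecast path converges to the unconditional
# mean mu = (I - G1)^{-1} G0 as the horizon grows.
mu = np.linalg.solve(np.eye(2) - G1, G0)
```

Because both eigenvalues of G1 are well inside the unit circle here, twenty iterations are enough for the forecast to be visually indistinguishable from µ.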
: