NBER Summer Institute Minicourse –
What’s New in Econometrics: Time Series
Lecture 9
July 16, 2008
Heteroskedasticity and Autocorrelation Consistent Standard Errors
Outline
1. What are HAC SE’s and why are they needed?
2. Parametric and Nonparametric Estimators
3. Some Estimation Issues (psd, lag choice, etc.)
4. Inconsistent Estimators
1. What are HAC SE’s and why are they needed?
Linear Regression: yt = xt′β + et

$$\hat\beta = \left(\sum_{t=1}^{T} x_t x_t'\right)^{-1}\left(\sum_{t=1}^{T} x_t y_t\right) = \beta + \left(\sum_{t=1}^{T} x_t x_t'\right)^{-1}\left(\sum_{t=1}^{T} x_t e_t\right)$$

$$\sqrt{T}(\hat\beta - \beta) = \left(\frac{1}{T}\sum_{t=1}^{T} x_t x_t'\right)^{-1}\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} x_t e_t\right) = S_{XX}^{-1} A_T .$$

$S_{XX} \xrightarrow{p} \Sigma_{XX} = E(x_t x_t')$ and $A_T \xrightarrow{d} N(0, \Omega)$, where $\Omega = \lim_{T\to\infty} \mathrm{Var}\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} x_t e_t\right)$.

Thus $\hat\beta \stackrel{a}{\sim} N\!\left(\beta, \tfrac{1}{T}V\right)$, where $\hat V = S_{XX}^{-1}\,\hat\Omega\,S_{XX}^{-1}$.
HAC problem: Ω̂ = ???
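As a practical matter, HAC standard errors are available in standard software. A minimal Python sketch (statsmodels' cov_type="HAC" option implements Newey-West; the simulated series and the choice maxlags=8 are illustrative assumptions, not from these slides):

```python
import numpy as np
import statsmodels.api as sm

def ar1(rng, T, phi, burn=50):
    """Simulate a mean-zero AR(1) series of length T, dropping a burn-in."""
    v = rng.standard_normal(T + burn)
    z = np.empty(T + burn)
    z[0] = v[0]
    for t in range(1, T + burn):
        z[t] = phi * z[t - 1] + v[t]
    return z[burn:]

rng = np.random.default_rng(0)
T = 250
x = ar1(rng, T, 0.7)                    # persistent regressor
y = 1.0 + 2.0 * x + ar1(rng, T, 0.7)    # persistent error, so x_t*e_t is autocorrelated
X = sm.add_constant(x)

fit_iid = sm.OLS(y, X).fit()                                         # classical SEs
fit_hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 8})  # Newey-West SEs
print(fit_iid.bse)
print(fit_hac.bse)
```

The gap between the two sets of standard errors widens as the serial correlation in xtet strengthens.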
(Same problem arises in IV regression, GMM, … ) Notation:
$$\Omega = \lim_{T\to\infty} \mathrm{Var}\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} x_t e_t\right)$$

Let wt = xtet (assume this is a scalar for convenience).

A feasible estimator of Ω must use $\hat w_t = x_t \hat e_t$. The estimator is $\hat\Omega(\{\hat w_t\})$, but most of our discussion uses $\hat\Omega(\{w_t\})$ for convenience.
An expression for Ω: Suppose wt is covariance stationary with autocovariances γj. (Also, for notational convenience, assume wt is a scalar). Then
$$\Omega = \mathrm{Var}\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} w_t\right) = \frac{1}{T}\,\mathrm{Var}(w_1 + w_2 + \cdots + w_T)$$
$$= \frac{1}{T}\left\{ T\gamma_0 + (T-1)(\gamma_1+\gamma_{-1}) + (T-2)(\gamma_2+\gamma_{-2}) + \cdots + 1\times(\gamma_{T-1}+\gamma_{-(T-1)}) \right\}$$
$$= \sum_{j=-(T-1)}^{T-1}\left(1 - \frac{|j|}{T}\right)\gamma_j$$
If the autocovariances are “1-summable,” so that $\sum_j |\gamma_j| < \infty$, then

$$\Omega = \lim_{T\to\infty}\mathrm{Var}\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} w_t\right) = \sum_{j=-\infty}^{\infty}\gamma_j$$
Recall that the spectrum of w at frequency ω is

$$S(\omega) = \frac{1}{2\pi}\sum_{j=-\infty}^{\infty}\gamma_j e^{-i\omega j},$$

so that Ω = 2π×S(0). Ω is called the “long-run” variance of w.
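For concreteness, a worked example (my addition, consistent with the parametric formula on the next slide): for an AR(1), wt = φwt−1 + εt with |φ| < 1,

$$\gamma_j = \frac{\sigma_\varepsilon^2\,\phi^{|j|}}{1-\phi^2}, \qquad \Omega = \sum_{j=-\infty}^{\infty}\gamma_j = \frac{\sigma_\varepsilon^2}{1-\phi^2}\cdot\frac{1+\phi}{1-\phi} = \frac{\sigma_\varepsilon^2}{(1-\phi)^2} = 2\pi S(0),$$

so positive serial correlation inflates the long-run variance relative to γ0.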
II. Estimators

(a) Parametric Estimators: wt ~ ARMA, φ(L)wt = θ(L)εt, so

$$S(\omega) = \frac{\sigma_\varepsilon^2}{2\pi}\,\frac{\theta(e^{-i\omega})\,\theta(e^{i\omega})}{\phi(e^{-i\omega})\,\phi(e^{i\omega})}$$

$$\Omega = 2\pi\times S(0) = \sigma_\varepsilon^2\,\frac{\theta(1)^2}{\phi(1)^2} = \sigma_\varepsilon^2\,\frac{(1-\theta_1-\theta_2-\cdots-\theta_q)^2}{(1-\phi_1-\phi_2-\cdots-\phi_p)^2}$$

$$\hat\Omega = \hat\sigma_\varepsilon^2\,\frac{(1-\hat\theta_1-\hat\theta_2-\cdots-\hat\theta_q)^2}{(1-\hat\phi_1-\hat\phi_2-\cdots-\hat\phi_p)^2}$$
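A minimal numpy sketch of the parametric estimator for the pure AR(p) case (θ(L) = 1); the function and the OLS fitting details are my own illustrative choices, not code from the slides:

```python
import numpy as np

def ar_hac(w, p):
    """Parametric HAC: fit AR(p) to w by OLS, return sigma2 / (1 - sum(phi))^2."""
    w = np.asarray(w, dtype=float)
    w = w - w.mean()                              # demean instead of fitting an intercept
    T = len(w)
    Y = w[p:]                                     # w_t, t = p+1, ..., T
    X = np.column_stack([w[p - k:T - k] for k in range(1, p + 1)])  # w_{t-1},...,w_{t-p}
    phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ phi) ** 2)
    return sigma2 / (1.0 - phi.sum()) ** 2        # sigma2 * theta(1)^2 / phi(1)^2, theta = 1
```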
Jargon: VAR-HAC is a version of this. Suppose wt is a vector and the estimated VAR using wt is

$$w_t = \hat\Phi_1 w_{t-1} + \cdots + \hat\Phi_p w_{t-p} + \hat\varepsilon_t, \qquad \hat\Sigma_\varepsilon = \frac{1}{T}\sum_{t=1}^{T}\hat\varepsilon_t\hat\varepsilon_t' .$$

Then

$$\hat\Omega = (I - \hat\Phi_1 - \cdots - \hat\Phi_p)^{-1}\,\hat\Sigma_\varepsilon\,\left[(I - \hat\Phi_1 - \cdots - \hat\Phi_p)^{-1}\right]' .$$
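A matching sketch for VAR-HAC (again illustrative; w is a T×n array, p the VAR order, and the VAR is fit by least squares after demeaning):

```python
import numpy as np

def var_hac(w, p):
    """VAR-HAC: Omega = A^{-1} Sigma_eps A^{-1}', with A = I - Phi_1 - ... - Phi_p."""
    w = np.asarray(w, dtype=float)
    w = w - w.mean(axis=0)
    T, n = w.shape
    Y = w[p:]                                     # (T-p) x n
    X = np.hstack([w[p - k:T - k] for k in range(1, p + 1)])  # lags 1..p side by side
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)     # B stacks Phi_k' in n-row blocks
    resid = Y - X @ B
    Sigma = resid.T @ resid / len(Y)
    A = np.eye(n) - sum(B[(k - 1) * n:k * n].T for k in range(1, p + 1))
    Ainv = np.linalg.inv(A)
    return Ainv @ Sigma @ Ainv.T
```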
(b) Nonparametric Estimators

$$\Omega = \sum_{j=-\infty}^{\infty}\gamma_j \qquad\Longrightarrow\qquad \hat\Omega = \sum_{j=-m}^{m}K_j\,\hat\gamma_j ,$$

with

$$\hat\gamma_j = \frac{1}{T}\sum_{t=j+1}^{T} w_t w_{t-j} = \hat\gamma_{-j} \qquad (j \ge 0).$$
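A direct numpy sketch of this estimator (illustrative; the kernel argument switches between Bartlett and truncated weights, and in applications w would be ŵt = xtêt):

```python
import numpy as np

def gamma_hat(w, j):
    T = len(w)
    return (w[j:] * w[:T - j]).sum() / T          # (1/T) sum_{t=j+1}^T w_t w_{t-j}

def omega_hat(w, m, kernel="bartlett"):
    w = np.asarray(w, dtype=float)
    out = gamma_hat(w, 0)
    for j in range(1, m + 1):
        K = 1.0 - j / (m + 1.0) if kernel == "bartlett" else 1.0  # else: truncated
        out += 2.0 * K * gamma_hat(w, j)          # K_j * (gamma_hat_j + gamma_hat_{-j})
    return out
```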
III. Issues
(a) Is Ω̂ ≥ 0?
(b) Number of lags (m) in the nonparametric estimator, or ARMA order in the parametric estimator
(c) Form of the Kj weights
(a) Is Ω̂ ≥ 0?

Parametric:
$$\hat\Omega = \hat\sigma_\varepsilon^2\,\frac{(1-\hat\theta_1-\cdots-\hat\theta_q)^2}{(1-\hat\phi_1-\cdots-\hat\phi_p)^2} \qquad \text{(yes)}$$

Nonparametric:
$$\hat\Omega = \sum_{j=-m}^{m}K_j\hat\gamma_j \qquad \text{(not necessarily)}$$

MA(1) example: Ω = (γ−1 + γ0 + γ1) and Ω̂ = (γ̂−1 + γ̂0 + γ̂1). In population |γ1| ≤ γ0/2, so Ω ≥ 0, but $|\hat\gamma_1| > \hat\gamma_0/2$ is possible, so Ω̂ can be negative.

Newey-West (1987): use Kj = 1 − |j|/(m+1) (Bartlett “kernel”).
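A small simulation sketch of the psd problem (an illustrative design of mine: an MA(1) with coefficient 0.95, so the true Ω is nearly zero and the truncated estimate γ̂0 + 2γ̂1 often goes negative, while the Bartlett m = 1 estimate γ̂0 + γ̂1 is a sum of squares and cannot):

```python
import numpy as np

rng = np.random.default_rng(1)
neg = 0
for _ in range(1000):
    eps = rng.standard_normal(101)
    w = eps[1:] - 0.95 * eps[:-1]                 # MA(1): Omega = (1 - 0.95)^2, near zero
    T = len(w)
    g0 = (w * w).sum() / T
    g1 = (w[1:] * w[:-1]).sum() / T
    neg += int(g0 + 2 * g1 < 0)                   # truncated kernel with m = 1
print(f"truncated estimate negative in {neg} of 1000 draws")
# Bartlett with m = 1 gives g0 + g1, which equals a sum of squares over 2T and is >= 0
```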
An alternative expression for $\hat\Omega = \sum_{j=-m}^{m}K_j\hat\gamma_j$:

Let W = (w1 w2 … wT)′ with W ~ (0, ΣWW), and let U = HW, so U ~ (0, ΣUU) with ΣUU = HΣWWH′. Choose H so that ΣUU = D, a diagonal matrix.

A useful result: with W covariance stationary (so ΣWW is “special”), a particular H matrix yields (approximately) ΣUU = D (diagonal). (The u’s are the coefficients from the discrete Fourier transform of the w’s.)

Variable                                   Variance (Dii)
u1                                         2π×S(0)
u2 and u3                                  2π×S(1×2π/T)
u4 and u5                                  2π×S(2×2π/T)
u6 and u7                                  2π×S(3×2π/T)
…                                          …
uT−1 and uT (T odd; similar for T even)    2π×S([(T−1)/2]×2π/T)
Some algebra shows that

$$\hat\Omega = \sum_{j=-m}^{m}K_j\hat\gamma_j = k_0 u_1^2 + 2\sum_{j=1}^{(T-1)/2} k_j\left(\frac{u_{2j}^2 + u_{2j+1}^2}{2}\right),$$

where the k-weights are functions (Fourier transforms) of the K-weights. The algebraic details are unimportant for our purposes, but for those interested they are sketched on the next slide.

Jargon: the term $\frac{u_{2j}^2+u_{2j+1}^2}{2}$ is called the j’th periodogram ordinate. Because

$$E(u_{2j}^2) = E(u_{2j+1}^2) = 2\pi S\!\left(\frac{2\pi j}{T}\right),$$

the j’th periodogram ordinate is an estimate of (2π times) the spectrum at frequency 2πj/T.

The important point: evidently, Ω̂ ≥ 0 requires kj ≥ 0 for all j.
The algebra linking the data w, the u’s, the periodogram ordinates, and the sample autocovariances parallels the expressions for the spectrum presented in Lecture 1. (Ref: Fuller (1976), Priestley (1981), or Brockwell and Davis (1991).)

In particular, let

$$c_j = \frac{1}{\sqrt{T}}\sum_{t=1}^{T} w_t e^{i\omega_j t}, \qquad \omega_j = \frac{2\pi j}{T}.$$

Then

$$|c_j|^2 = \frac{u_{2j}^2 + u_{2j+1}^2}{2} = \sum_{p=-(T-1)}^{T-1}\hat\gamma_p\cos(p\,\omega_j),$$

where $\hat\gamma_p = \frac{1}{T}\sum_{t=p+1}^{T}(w_t - \bar w)(w_{t-p} - \bar w)$.

Thus $|c_j|^2 = \frac{u_{2j}^2+u_{2j+1}^2}{2}$ is a natural estimator of (2π times) the spectrum at frequency ωj.
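A numerical check of this identity (a sketch; np.fft.fft computes the required sums, and the γ̂p here use demeaned w):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
w = rng.standard_normal(T)
wd = w - w.mean()

I = np.abs(np.fft.fft(wd)) ** 2 / T               # |c_j|^2 at omega_j = 2*pi*j/T
gam = np.array([(wd[p:] * wd[:T - p]).sum() / T for p in range(T)])
j = 3
omega_j = 2 * np.pi * j / T
rhs = gam[0] + 2 * (gam[1:] * np.cos(np.arange(1, T) * omega_j)).sum()
print(I[j], rhs)                                  # the two agree up to rounding
```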
$$\hat\Omega = \sum_{j=-m}^{m}K_j\hat\gamma_j = k_0 u_1^2 + 2\sum_{j=1}^{(T-1)/2}k_j\left(\frac{u_{2j}^2+u_{2j+1}^2}{2}\right)$$

is then a weighted average (with weights kj) of the estimated spectra at different frequencies. Because the goal is to estimate the spectrum at ω = 0, the weights should concentrate around this frequency as T gets large.

Jargon: Kernel (K-weights, sometimes k-weights); Lag Window (the K-weights; RATS follows this notation and calls these “lwindow” in the code); Spectral Window (the k-weights).
Truncated Kernel: Kj = 1(|j| ≤ m)

[Figure: spectral weight (k) for the truncated kernel, m = 12, T = 250]
Newey-West (Bartlett Kernel): Kj = 1 − |j|/(m+1)

[Figure: spectral weight (k) for the Bartlett kernel, m = 12, T = 250]
Implementing the Estimator (lag order, kernel choice, and so forth)

Parametric estimators, AR/VAR approximations: Berk (1974), Ng and Perron (2001); lag-length choice in ADF tests raises related, but different, issues.

Shorter lags: larger bias, smaller variance.
Longer lags: smaller bias, larger variance.

Practical advice: choose lags a little longer than you might otherwise (rationale given below).
How should m be chosen? Andrews (1991) and Newey and West (1994): minimize MSE(Ω̂ − Ω), where MSE = Variance + Bias².

$$\hat\Omega = k_0 u_1^2 + 2\sum_{j=1}^{(T-1)/2}k_j\left(\frac{u_{2j}^2+u_{2j+1}^2}{2}\right)$$

Variance: spread the weight out over many of the squared u’s, i.e., make the spectral weights flat.

Bias: E(Ω̂) depends on the values $E\!\left(\frac{u_{2j}^2+u_{2j+1}^2}{2}\right) \approx 2\pi S(j\times 2\pi/T)$. So the bias depends on how flat the spectrum is around ω = 0: the flatter it is, the smaller the bias; the more curved it is, the larger the bias.
[Figure: spectral window (kj) for the Newey-West/Bartlett lag window (Kj = 1 − |j|/(m+1)), T = 250; panels for m = 5, 10, 20, 50]
MSE-minimizing values of m for wt = φwt−1 + et, Bartlett kernel. (Note: the higher is φ, the more serial correlation, the more “curved” the spectrum around ω = 0, and the more bias for a given value of m.)

m* = 1.1447×4^(1/3)×(φ²)^(1/3)×T^(1/3) = 1.82×φ^(2/3)×T^(1/3)

φ      m*             T = 100   T = 400   T = 1000
0.00   0               0         0         0
0.25   0.72×T^(1/3)    3         5         7
0.50   1.15×T^(1/3)    5         8        11
0.75   1.50×T^(1/3)    6        11        15
0.90   1.70×T^(1/3)    7        12        17
0.95   1.76×T^(1/3)    8        12        17
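As code, the rule of thumb is one line (a sketch; the coefficient rounding and integer truncation are my guesses at how the table was produced, and the epsilon guards against floating-point cube roots):

```python
def m_star(phi, T):
    coef = round(1.82 * phi ** (2.0 / 3.0), 2)    # 0.72, 1.15, 1.50, 1.70, 1.76 above
    return int(coef * T ** (1.0 / 3.0) + 1e-9)    # truncate, as the table appears to

for phi in (0.0, 0.25, 0.50, 0.75, 0.90, 0.95):
    print(phi, [m_star(phi, T) for T in (100, 400, 1000)])
```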
Kernel Choice: [Figure: some kernels (lag windows)]
Does kernel choice matter? In theory, yes (the quadratic spectral (QS) kernel is optimal). In practice (for psd kernels), not really.
Does lag length matter? Yes, quite a bit. yt = β + wt, wt = φwt−1 + et, T = 250. Rejection rates of a two-sided 10% test:

φ      m̂*     AR     m̂* (PW)   m*     2×m*
0.00   0.10   0.11   0.11      0.10   0.10
0.25   0.13   0.11   0.11      0.13   0.13
0.50   0.15   0.11   0.11      0.15   0.14
0.75   0.21   0.12   0.12      0.21   0.18
0.90   0.33   0.15   0.15      0.33   0.25
0.95   0.46   0.20   0.19      0.46   0.37
(More Simulations: den Haan and Levin (1997) … available on den Haan’s web page … and other papers listed throughout these slides)
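A Monte Carlo sketch in the spirit of this experiment (with assumptions of mine: a two-sided 10% t-test of the true mean using the N(0,1) critical value 1.645 and Bartlett weights with a fixed m):

```python
import numpy as np

def bartlett_lrv(wb, m):
    """Bartlett long-run variance of a demeaned series wb, with m lags."""
    T = len(wb)
    g = [(wb[j:] * wb[:T - j]).sum() / T for j in range(m + 1)]
    return g[0] + 2 * sum((1 - j / (m + 1)) * g[j] for j in range(1, m + 1))

def reject_rate(phi, m, T=250, reps=2000, seed=3):
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(reps):
        e = rng.standard_normal(T + 50)
        w = np.empty(T + 50); w[0] = e[0]
        for t in range(1, T + 50):
            w[t] = phi * w[t - 1] + e[t]          # AR(1) with true mean zero
        w = w[50:]                                # drop burn-in
        om = bartlett_lrv(w - w.mean(), m)
        rej += abs(np.sqrt(T) * w.mean() / np.sqrt(om)) > 1.645
    return rej / reps

print(reject_rate(phi=0.75, m=6))                 # compare with the table's entries
```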
Why use more lags than the MSE-optimal choice? (Sun, Phillips, and Jin (2008))

Intuition: let z ~ N(0, σ²) and let σ̂² be an estimator of σ² (assumed independent of z). Then

$$\Pr(z^2 < c\hat\sigma^2) = E\,1(z^2 < c\hat\sigma^2) = E\,g(\hat\sigma^2), \qquad g(s) = \Pr(z^2 < cs) = F_{\chi_1^2}(cs/\sigma^2),$$

$$E\,g(\hat\sigma^2) \approx g(\sigma^2) + g'(\sigma^2)\,E(\hat\sigma^2 - \sigma^2) + \tfrac{1}{2}g''(\sigma^2)\,E(\hat\sigma^2 - \sigma^2)^2$$
$$= F_{\chi_1^2}(c) + g'(\sigma^2)\times\mathrm{Bias}(\hat\sigma^2) + \tfrac{1}{2}g''(\sigma^2)\times\mathrm{MSE}(\hat\sigma^2).$$

The MSE criterion weights Bias² + Variance; this coverage formula involves Bias and MSE, so the bias (which enters linearly rather than squared) matters relatively more for coverage. Thus m should be bigger.
IV. Making m really big

m = bT where b is fixed. (Kiefer, Vogelsang and Bunzel (2000), Kiefer and Vogelsang (2002), Kiefer and Vogelsang (2005))

Bartlett with m = T−1:

$$\hat\Omega(\hat w) = \sum_{j=-(T-1)}^{T-1}\left(1 - \frac{|j|}{T}\right)\hat\gamma_j, \qquad \hat\gamma_j = \frac{1}{T}\sum_{t=j+1}^{T}\hat w_t\hat w_{t-j} = \hat\gamma_{-j},$$

where $\hat w_t = w_t - \bar w$.
KV (2002) show that some rearranging yields

$$\hat\Omega(\hat w) = \frac{2}{T}\sum_{t=1}^{T}\left(T^{-1/2}\sum_{j=1}^{t}\hat w_j\right)^2 .$$

But

$$T^{-1/2}\sum_{j=1}^{[sT]} w_j \xrightarrow{d} \Omega^{1/2}W(s), \qquad T^{-1/2}\sum_{j=1}^{[sT]} \hat w_j \xrightarrow{d} \Omega^{1/2}\left(W(s) - sW(1)\right),$$

and $T^{-1/2}\sum_{j=1}^{[\cdot T]}\hat w_j \Rightarrow \xi(\cdot)$, where $\xi(s) = \Omega^{1/2}(W(s) - sW(1))$.
Thus

$$\hat\Omega(\hat w) = \frac{2}{T}\sum_{t=1}^{T}\left(T^{-1/2}\sum_{j=1}^{t}\hat w_j\right)^2 \xrightarrow{d} 2\Omega\int_0^1\left(W(s) - sW(1)\right)^2 ds,$$

so that Ω̂(ŵ) is a “robust,” but inconsistent, estimator of Ω. Distributions of “t-statistics” (or other statistics using this Ω̂) will not be asymptotically normal, but their large-sample distributions are easily tabulated (see KV (2002)).
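The KV (2002) identity is easy to verify numerically (a sketch, with an arbitrary AR(1) series):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
e = rng.standard_normal(T)
w = np.empty(T); w[0] = e[0]
for t in range(1, T):
    w[t] = 0.6 * w[t - 1] + e[t]                  # some serially correlated series
wb = w - w.mean()                                 # hat-w

# lag-domain form: sum over |j| <= T-1 of (1 - |j|/T) * gamma_hat_j
gam = np.array([(wb[j:] * wb[:T - j]).sum() / T for j in range(T)])
lhs = gam[0] + 2 * ((1 - np.arange(1, T) / T) * gam[1:]).sum()

# partial-sum form: (2/T) * sum over t of (T^{-1/2} * S_t)^2
S = np.cumsum(wb)
rhs = 2.0 * (S ** 2).sum() / T ** 2
print(lhs, rhs)                                   # equal up to rounding error
```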
[Table: critical values for the t-statistic]

[Table: F critical values: see KVB (divide by 2!)]
[Figure: power “loss” (from KVB)]
[Figure: size control, finite-sample AR(1)]
(a) Advantages:
(i) Somewhat better size control (in Monte Carlos and in theory; Jansson (2004)).
(ii) More “robust” to serial correlation (Müller (2007)). Müller proposes estimators that yield “t-statistics” with t-distributions.

(b) Disadvantages:
(i) Wider confidence intervals. In the one-dimensional regression problem, consider a value of β − β0 chosen so that β is contained in the (true, not size-distorted) 90% HAC confidence interval with probability 0.10; that value of β is contained in the KVB 90% CI with probability 0.23. (Based on an asymptotic calculation.)
(ii) Robustness to volatility breaks? No.
KVB Cousins (some related work):

(a) Müller (2007): focus on the low-frequency observations only. How many do you have? There are periodogram ordinates at frequencies 2πj/T, j = 1, 2, …, (T−1)/2. Let p denote a low-frequency “cutoff”: frequencies with periods greater than p satisfy ω ≤ 2π/p. Solving, the number of periodogram ordinates with periods ≥ p is T/p.

Example: with 60 years of data and p = 6 years, the number of ordinates is 10. (This does not depend on the sampling frequency.) Müller: do inference based on these observations. Appropriately weighted, these yield t-distributions for t-statistics.
(b) Panel data: Hansen (2007). “Clustered” SEs (over T) when T is large and n is small (T → ∞, n fixed). This too yields “t-statistics” distributed as t (with n−1 df), and appropriately constructed “F-statistics” have the Hotelling T-squared distribution.

(c) Ibragimov and Müller (2007): allows heterogeneity. (See the sketch below.)
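A sketch of the Ibragimov-Müller idea for the simplest case, a scalar mean (the block count q = 8 and the helper name are my choices, not from the paper):

```python
import numpy as np
from scipy import stats

def im_test(w, q=8):
    """Split the sample into q blocks; ordinary t-test on the q block means (q-1 df)."""
    blocks = np.array_split(np.asarray(w, dtype=float), q)
    est = np.array([b.mean() for b in blocks])    # one estimate per block
    t = np.sqrt(q) * est.mean() / est.std(ddof=1)
    pval = 2 * stats.t.sf(abs(t), df=q - 1)
    return t, pval
```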
HAC Bottom Line(s)

(1) Parametric estimators are easy and tend to perform reasonably well. In some applications a parametric model is natural (an MA(q) model when forecasting q+1 periods ahead); in other circumstances VAR-HAC is sensible (den Haan and Levin (1997)). Think about changes in volatility.

(2) Why are you estimating Ω?
(a) Optimal weighting matrix in GMM: minimum-MSE Ω̂ seems like a good idea. (Analytic results on this?)
(b) Inference: use more lags than you otherwise would think you should in Newey-West (or other nonparametric estimators). Worried? Use KVB estimators (and their critical values for tests) or Müller (2007) versions (with t or F critical values).