
ISSN 1440-771X

ISBN 0 7326 1066 4

MONASH UNIVERSITY

AUSTRALIA

A Test for the Difference Parameter of the ARFIMA Model

Using the Moving Blocks Bootstrap

Elizabeth Ann Maharaj

Working Paper 11/99

September 1999

DEPARTMENT OF ECONOMETRICS

AND BUSINESS STATISTICS

A TEST FOR THE DIFFERENCE PARAMETER OF THE ARFIMA

MODEL USING THE MOVING BLOCKS BOOTSTRAP

Elizabeth Ann Maharaj

Department of Econometrics and Business Statistics

Monash University

Australia

Abstract

In this paper we construct a test for the difference parameter d in the fractionally integrated

autoregressive moving-average (ARFIMA) model. Obtaining estimates by the smoothed spectral

regression estimation method, we use the moving blocks bootstrap method to construct the

test for d. The results of Monte Carlo studies show that this test is generally valid for certain

block sizes, and for these block sizes, the test has reasonably good power.

Keywords: Long memory, Periodogram regression, Smoothed periodogram regression,

Block size.

1
1 Introduction

During the last two decades there has been considerable interest in the application of long

memory time series processes in many fields, such as economics, hydrology and geology.

Hurst (1951), Mandelbrot (1971) and McLeod and Hipel (1978), among others, observed the

long memory property in time series while working with different data sets. They observed

the persistence of the observed autocorrelation function, that is, the autocorrelations take far longer to decay than those of the associated ARMA class. Granger (1978) was

one of the first authors who considered fractional differencing in time series analysis.

Based on studies by McLeod and Hipel (1978) which are related to the original

studies of Hurst (1951), a simple method for estimating d based on re-scaled adjusted range

known as the Hurst coefficient method, was considered by various authors. However,

Hosking (1984) concluded that this estimator of d cannot be recommended because the

estimator of the equivalent Hurst coefficient is biased for some values of the coefficient and

has large sampling variability.

Granger and Joyeux (1980) approximated the ARFIMA model by a high-order

autoregressive process and estimated the difference parameter d by comparing variances for

each different choice of d. Geweke and Porter-Hudak (1983) and Kashyap and Eom (1988)

used a regression procedure for the logarithm of the periodogram to estimate d. Hassler

(1993) suggested an estimator of d based on the regression procedure for the logarithm of the

smoothed periodogram. Chen et al. (1993) and Reisen (1994) also considered estimators of d

using the smoothed periodogram. Simulated results by these authors show that the smoothed

periodogram regression estimator of d performs much better than the corresponding

periodogram estimator of d. This is because, while the periodogram is asymptotically

unbiased, it is not a consistent estimator of the spectral density function whereas the smoothed

periodogram is.

The maximum likelihood estimation method of d is another popular method where all

other parameters present in the model are estimated together with d (see Cheung and Diebold

(1994), Sowell (1992) and Hosking (1984)).

Hypothesis tests for d have been considered by various authors. Davies and Harte

(1987) constructed tests for the Hurst coefficient (which is a function of the difference parameter

d) based on the beta-optimal principle, local optimality and the re-scaled range test. However

these tests were restricted to testing for fractional white noise and fractional first-order

autoregressive processes. Kwiatkowski et al. (1992) also considered tests for fractional white

noise but these tests were based on the unit root approach.

Agiakloglou et al. (1993) have shown that if either the autoregressive or moving

average operator in the ARFIMA model contains large roots, the periodogram regression

estimator of d will be biased and the asymptotic test based on it will not produce good results.

Agiakloglou and Newbold (1994) developed Lagrange multiplier tests for the general

ARFIMA model. However before testing for d, the order of ARMA model fitted to the series

must be known. This can pose a problem, since the order of the fitted model is influenced by

the value of the difference parameter.

Hassler (1993) considered tests based on the asymptotic results of Kashyap and Eom

(1988) for the periodogram regression estimator of d. However, he showed that this test is only

valid if the series are generated from a fractional white noise process. He also considered

tests based on the empirical distribution of the smoothed periodogram regression estimator of

d and concluded that this test is superior to the test based on the periodogram regression

estimator of d, for discriminating between ARFIMA(p, 0, 0) and ARFIMA(p, d, 0) processes.

Reisen (1994) considered tests based on the asymptotic results of Geweke and Porter-Hudak (1983) for the periodogram regression estimator of d, as well as on asymptotic results for the smoothed periodogram regression estimator of d. He concluded from simulated

results that smoothed periodogram regression may be superior to periodogram regression for

discriminating between ARFIMA(p, 0, q) and ARFIMA(p, d, q) processes.

The small-sample distribution of the estimators of d is unknown for any of these estimation methods. In this paper we develop a non-parametric test based on the moving blocks bootstrap

(MBB) method, using the smoothed spectral regression estimator of d. The test can be applied

to both small and large samples and does not depend on the distribution of the estimator of d.

In Section 2, we briefly describe the estimation of d using the periodogram and

smoothed periodogram regression methods. We describe the moving blocks bootstrap test in

Section 3, while in Section 4, we outline the experimental design and discuss the results of the

simulation studies.

2 Regression Estimators of d

Using the notation of Box and Jenkins (1970), the autoregressive integrated moving average process, ARIMA(p,d,q), is defined as

φ(B)(1 − B)^d X_t = θ(B) Z_t    (2.1)

where B is the back-shift operator, Z_t is a white noise process with mean zero and variance σ², and φ(B) = 1 − φ_1 B − … − φ_p B^p and θ(B) = 1 − θ_1 B − … − θ_q B^q are stationary autoregressive and invertible moving average operators of order p and q respectively. Granger and Joyeux (1980) and Hosking (1981) extended the model (2.1) by allowing d to take fractional values in the range (−0.5, 0.5). They expanded (1 − B)^d using the binomial expansion

(1 − B)^d = Σ_{k=0}^{∞} (d choose k) (−B)^k.    (2.2)

Geweke and Porter-Hudak (1983) obtained the periodogram estimator of d as follows. The spectral density function of the ARIMA(p,d,q) process is

f(ω) = f_u(ω) {2 sin(ω/2)}^{−2d}    (2.3)

where f_u(ω) is the spectral density of the ARMA(p,q) process. Hence

f(ω) = (σ²/2π) |θ(e^{−iω})|² / |φ(e^{−iω})|² · {2 sin(ω/2)}^{−2d},  ω ∈ (−π, π).

As ω → 0, lim{ω^{2d} f(ω)} exists and is finite. Given a sample of size T, and given that ω_j = 2πj/T, (j = 1, 2, . . . , T/2) is a set of harmonic frequencies, taking the logarithm of (2.3) gives

ln{f(ω_j)} = ln{f_u(ω_j)} − d ln{2 sin(ω_j/2)}².

This may be written as

ln{f(ω_j)} = ln{f_u(0)} − d ln{2 sin(ω_j/2)}² + ln{f_u(ω_j)/f_u(0)}.    (2.4)

For a given series x_1, x_2, . . . , x_T the periodogram is given by

I_x(ω) = (1/2π){R(0) + 2 Σ_{k=1}^{T−1} R(k) cos(kω)},  ω ∈ (−π, π).    (2.5)

Adding ln{I(ω_j)/f(ω_j)} to both sides of equation (2.4) gives

ln{I(ω_j)} = ln{f_u(0)} − d ln{2 sin(ω_j/2)}² + ln{f_u(ω_j)/f_u(0)} + ln{I(ω_j)/f(ω_j)}.    (2.6)

If the upper limit of j, say g, is chosen so that g/T → 0 and if ω_j is close to zero, then the term ln{f_u(ω_j)/f_u(0)} becomes negligible. Equation (2.6) can then be written as a simple regression equation

y_j = a + b x_j + u_j,  j = 1, 2, . . . , g    (2.7)

where y_j = ln{I(ω_j)}, x_j = ln{2 sin(ω_j/2)}², u_j = ln{I(ω_j)/f(ω_j)} + c, b = −d, a = ln{f_u(0)} − c and c = E[−ln{I(ω_j)/f(ω_j)}]. The estimator of d is then −b̂, where b̂ is the least squares estimate of the slope of the regression equation (2.7), that is

b̂ = Σ_{j=1}^{g} (x_j − x̄) y_j / Σ_{j=1}^{g} (x_j − x̄)².
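As an illustration, the periodogram regression estimator can be sketched in a few lines of Python. This is not the authors' code (the study was programmed in Gauss); the FFT-based periodogram and the default choice g = T^0.5 (taken from Section 4.1) are assumptions of the sketch.

```python
import numpy as np

def gph_estimate(x, g=None):
    """Periodogram regression (GPH) estimate of d (a sketch).

    g is the number of low-frequency ordinates used; T**0.5 by default.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    if g is None:
        g = int(T ** 0.5)
    j = np.arange(1, g + 1)
    w = 2.0 * np.pi * j / T                       # harmonic frequencies w_j
    # Periodogram I(w_j) via the FFT of the mean-corrected series;
    # equivalent to Equation (2.5) at the harmonic frequencies.
    dft = np.fft.fft(x - x.mean())
    I = np.abs(dft[1:g + 1]) ** 2 / (2.0 * np.pi * T)
    y = np.log(I)                                 # y_j in (2.7)
    xreg = np.log((2.0 * np.sin(w / 2.0)) ** 2)   # x_j in (2.7)
    b = ((xreg - xreg.mean()) * y).sum() / ((xreg - xreg.mean()) ** 2).sum()
    return -b                                     # d-hat = -b-hat
```

For white noise (d = 0) the estimate fluctuates around zero with the sampling variability given by the asymptotic variance formulas below in Section 2.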

Hassler (1993), Chen et al. (1993) and Reisen (1994) all independently suggested an estimate of d based on the regression procedure for the logarithm of the smoothed periodogram. The smoothed periodogram estimate of the spectral density function is

f̂_x(ω) = (1/2π){R(0)λ_0 + 2 Σ_{k=1}^{m} λ_k R(k) cos(kω)},  ω ∈ (−π, π),    (2.8)

where {λ_k} are a set of weights called the lag window, and m < T is called the truncation point. The smoothed periodogram estimates are consistent if m is chosen so that, as T → ∞ and m → ∞, m/T → 0. While the periodogram estimates are asymptotically unbiased, they are not consistent. Various lag windows can be chosen to smooth the periodogram. The Parzen window is given by

λ_k = 1 − 6(k/m)² + 6(k/m)³,  0 ≤ k ≤ m/2,
λ_k = 2(1 − k/m)³,  m/2 ≤ k ≤ m,

and has the desirable property that it always produces positive estimates of the spectral density function.
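The lag-window estimate (2.8) with the Parzen window can be sketched as follows; the biased sample autocovariances R(k) and the function names are illustrative choices, not taken from the paper.

```python
import numpy as np

def parzen_weights(m):
    """Parzen lag window weights lambda_k, k = 0, ..., m."""
    k = np.arange(m + 1)
    u = k / m
    return np.where(k <= m / 2, 1 - 6 * u**2 + 6 * u**3, 2 * (1 - u) ** 3)

def smoothed_periodogram(x, w, m):
    """Smoothed periodogram estimate f-hat(w) of Equation (2.8).

    m is the truncation point; R(k) are the biased sample
    autocovariances of the mean-corrected series.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    R = np.array([xc[: T - k] @ xc[k:] / T for k in range(m + 1)])
    lam = parzen_weights(m)
    w = np.atleast_1d(np.asarray(w, dtype=float))
    cosines = np.cos(np.outer(w, np.arange(1, m + 1)))
    return (lam[0] * R[0] + 2.0 * cosines @ (lam[1:] * R[1:])) / (2.0 * np.pi)
```

For unit-variance white noise the true spectral density is the constant 1/(2π) ≈ 0.159, and the estimate stays positive everywhere, as the Parzen window guarantees.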

Equation (2.6) can then be written as

ln{f̂(ω_j)} = ln{f_u(0)} − d ln{2 sin(ω_j/2)}² + ln{f_u(ω_j)/f_u(0)} + ln{f̂(ω_j)/f(ω_j)}.    (2.9)

Then equation (2.9) can be expressed as equation (2.7) with y_j = ln{f̂(ω_j)} and u_j = ln{f̂(ω_j)/f(ω_j)}. The smoothed spectral regression estimator of d is then

d̂ = −b̂ = −Σ_{j=1}^{g} (x_j − x̄) y_j / Σ_{j=1}^{g} (x_j − x̄)².    (2.10)

Geweke and Porter-Hudak (1983) showed that the periodogram regression estimator of d is asymptotically normally distributed with mean d and variance

π² / (6 Σ_{j=1}^{g} (x_j − x̄)²),

while Kashyap and Eom (1988) showed that the periodogram regression estimator of d is asymptotically normally distributed with mean d and variance 1/(2T). The smoothed

periodogram regression estimator is also asymptotically normally distributed with mean d. In particular, for a Parzen window, Reisen (1994) showed that the variance of this estimator is approximately

0.53928 m / (T Σ_{j=1}^{g} (x_j − x̄)²).
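Both variance formulas can be evaluated numerically; the helper below is a sketch that assumes the regressor x_j = ln{2 sin(ω_j/2)}² with ω_j = 2πj/T, as in Section 2, and the function names are illustrative.

```python
import numpy as np

def _sum_sq_x(T, g):
    """Sum of squared deviations of the regressors x_j about their mean."""
    j = np.arange(1, g + 1)
    xj = np.log((2.0 * np.sin(np.pi * j / T)) ** 2)  # x_j = ln{2 sin(w_j/2)}^2
    return ((xj - xj.mean()) ** 2).sum()

def gph_variance(T, g):
    """Asymptotic variance of the periodogram regression estimator."""
    return np.pi ** 2 / (6.0 * _sum_sq_x(T, g))

def smoothed_variance(T, g, m):
    """Approximate variance of the smoothed (Parzen) estimator, after Reisen (1994)."""
    return 0.53928 * m / (T * _sum_sq_x(T, g))
```

For T = 100, g = 10 and m = 25, the smoothed estimator's variance is markedly smaller, consistent with the superiority of the smoothed periodogram regression reported above.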

3 Moving Blocks Bootstrap Test

3.1 Methodology

Efron (1979) initiated the standard bootstrap and jackknife procedures based on independent

and identically distributed observations. This set-up gives a better approximation to the

distribution of statistics compared to the classical large sample approximations (Bickel and

Freedman 1981, Singh 1981, Babu 1986). It was pointed out by Singh (1981) that when the

observations are not independent, as is the case for time series data, the bootstrap

approximation may not be good. In such a situation the ordinary bootstrap procedure fails to

capture the dependency structure of the data.

A general bootstrap procedure for weakly stationary data, free of specific modelling,

has been formulated by Kunsch (1989). A similar procedure has been suggested

independently by Liu and Singh (1988). This procedure, referred to as the moving blocks

bootstrap method, does not require one to first fit a parametric or semi-parametric model to

the dependent data. The procedure works for arbitrary stationary processes with short range

dependence. Moving blocks bootstrap samples are drawn from a stationary dependent data set as follows. Let X_1, X_2, . . . , X_T be a sequence of stationary dependent random variables with common distribution function for each X_i. Let B_1, B_2, . . . , B_{T−b+1} be the moving blocks, where b is the size of each block; B_j stands for the jth block, consisting of b consecutive observations, that is B_j = {x_j, x_{j+1}, . . . , x_{j+b−1}}. Then k independent block samples are drawn with replacement from B_1, B_2, . . . , B_{T−b+1}. All observations of the k sampled blocks are then pasted together in succession to form a bootstrap sample. The number of blocks k is chosen so that T ≈ kb. With

the moving blocks bootstrap, the idea is to choose a block size b large enough so that the

observations more than b units apart are nearly independent. We simply cannot resample from

the individual observations because this would destroy the correlation that we are trying to

capture. However by sampling the blocks of length b, the correlation present in observations

less than b units apart is retained. The choice of block size can be quite important. Hall et al.

(1995) addressed this issue. They pointed out that optimal block size depends on the context

of the problem and suggested some data driven procedures.
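The resampling scheme just described can be sketched as follows; drawing k = ⌈T/b⌉ blocks and trimming the pasted series back to length T are implementation choices of this sketch (the source only requires T ≈ kb).

```python
import numpy as np

def moving_blocks_sample(x, b, rng):
    """Draw one moving blocks bootstrap sample of (about) length T.

    Overlapping blocks B_j = (x_j, ..., x_{j+b-1}) are sampled with
    replacement and pasted together in succession.
    """
    x = np.asarray(x)
    T = len(x)
    k = -(-T // b)                                 # ceil(T / b), so T ~ kb
    starts = rng.integers(0, T - b + 1, size=k)    # chosen block start points
    return np.concatenate([x[s:s + b] for s in starts])[:T]
```

Because whole blocks are copied, any dependence at lags shorter than b survives into the bootstrap sample, which is precisely what resampling individual observations would destroy.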

Liu and Singh (1988) have shown that for a general statistic that is a smooth functional, the moving blocks bootstrap procedure is consistent, assuming that b → ∞ at a rate such that b/T → 0 as T → ∞.

Variations of the moving blocks bootstrap, in particular subsampling of blocks have

been discussed by Politis et al. (1997). However this procedure is far too computationally

intensive and has not been considered here.

3.2 Hypothesis Testing

It has been suggested by Hinkley (1988) and Romano (1988) that when bootstrap methods are

used in hypothesis testing, the bootstrap test statistic should be in the form of a metric.

Romano (1988) has shown that test statistics in the form of metrics yield valid tests against all

alternatives.

To test H0: d = 0 against H1: d ≠ 0, we use the test statistic W = |d̂ − d|, where d̂ is the smoothed periodogram regression estimate based on the Parzen window (Equation 2.10).

Even though the consistency results of Liu and Singh (1988) will not strictly apply here, since

the estimator of d is not a smooth functional, we will nevertheless obtain a moving blocks

bootstrap estimator of d and use it in the hypothesis testing procedure that follows.

The bootstrap provides a first-order asymptotic approximation to the distribution of W under the null hypothesis. Hence, the null hypothesis can be tested by comparing W to a bootstrap-based critical value. This is equivalent to comparing a bootstrap-based p-value to a nominal significance level α. Consider the sample x_1, x_2, . . . , x_T. The bootstrap-based p-value is obtained by the following steps: (1) Sample with replacement k times from the set {B_1, B_2, . . . , B_{T−b+1}}. This produces a set of blocks {B*_1, B*_2, . . . , B*_k} which, when laid end-to-end, forms a new time series, that is, the bootstrap sample of length T, x*_1, x*_2, . . . , x*_T. (2) Using this sample, the test statistic W* = |d̂* − d̂| is calculated, where d̂* is the bootstrap estimate of d, obtained from Equation (2.10) with y_j computed from x*_1, x*_2, . . . , x*_T. Steps (1) and (2) are then repeated J times. The empirical distribution of the J values of W* is the bootstrap estimate of the distribution of W. The bootstrap-based p-value p* is an estimate of the p-value that would be associated with the test statistic W, and is obtained as p* = #(W* > W)/J. For a nominal significance level α, H0 is rejected if p* < α.
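Steps (1) and (2) and the p-value calculation can be sketched as follows. Here `d_hat_fn` stands in for any estimator of d (in the paper, the smoothed periodogram regression estimator of Equation (2.10)); the function name and block-sampling details are illustrative assumptions.

```python
import numpy as np

def mbb_pvalue(x, d_hat_fn, b, J=500, rng=None):
    """Moving blocks bootstrap p-value for H0: d = 0 (a sketch).

    W = |d_hat - 0| is compared with J bootstrap statistics
    W* = |d_hat* - d_hat|, recentred at d_hat as in Section 3.2.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    T = len(x)
    d_hat = d_hat_fn(x)
    W = abs(d_hat)                  # test statistic under H0: d = 0
    k = -(-T // b)                  # ceil(T / b)
    count = 0
    for _ in range(J):
        starts = rng.integers(0, T - b + 1, size=k)
        x_star = np.concatenate([x[s:s + b] for s in starts])[:T]
        count += abs(d_hat_fn(x_star) - d_hat) > W
    return count / J                # p*; reject H0 if p* < alpha
```

Recentring W* at d̂ rather than at the hypothesised value is exactly the choice discussed below, following Hall and Wilson (1991).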

Note that choosing |d̂* − d̂| rather than |d̂* − d| as the bootstrap test statistic has the effect of increasing the power of the test (see Hall and Wilson (1991)). Beran (1988) and Fisher and Hall (1990) suggest that a bootstrap test should be based on a pivotal test statistic because this improves the accuracy of the test. However, since the standard deviation of d̂* cannot be obtained from the sample, a pivotal test is not possible in this case.

4 Simulation Studies

4.1 Experimental Design

Time series of lengths T = 100 and 300 were generated from ARFIMA(p,d,q) processes, using the method suggested by Hosking (1981). The white noise process is assumed to be normally distributed with mean 0 and standard deviation 1. With the parameter d ∈ [−0.45, 0.45] in increments of 0.15, the series were generated from ARFIMA(1,d,0) with φ = 0, 0.1, 0.3, 0.5, 0.7, ARFIMA(0,d,1) with θ = 0, 0.1, 0.3, 0.5, 0.7, and ARFIMA(1,d,1) with φ = −0.6, θ = 0.3. These models were chosen so that both first and second order processes, as well as a range of parameter values, would be considered. Block sizes of the order b = T^k, 0.3 < k < 0.8, in increments of 0.05 were considered. This range of k values was selected to ensure that the blocks contained neither too few nor too many observations.
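As a quick check of the design, the block sizes b = T^k (rounded to the nearest integer) for the values of k that turn out to matter most can be computed directly:

```python
# Block sizes b = T**k, rounded to the nearest integer.
for T in (100, 300):
    for k in (0.60, 0.65, 0.70):
        print(f"T = {T}, k = {k}: b = {round(T ** k)}")
# T = 100 gives b = 16, 20, 25; T = 300 gives b = 31, 41, 54.
```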

Estimates of d were obtained by the method of smoothed periodogram regression using the Parzen window. The number of regression terms in Equation (2.10) was chosen to be g = T^0.5, while the truncation point in the Parzen lag window was chosen to be m = T^0.7. (Chen et al. (1993) and Reisen (1994) showed that these choices produced good estimates of d in their simulation studies.) Other exponents in the range (0, 1) were also trialled, but were found to produce poor estimates of d.

A total of 500 moving blocks bootstrap replications were generated for each Monte Carlo trial, and each case was repeated 1000 times. All programming was done in Gauss.

4.2 Discussion

For both T = 100 and 300 with block sizes b = T^k, 0.3 < k < 0.6, the size of the test was mostly underestimated. However, for T = 100 with block size b = T^0.6 = 16, for series generated from the AR(1), φ = 0.5 and MA(1), θ = 0.5 processes, the size estimates were fairly close to the nominal levels of significance. For block sizes b = T^0.65 = 20 and b = T^0.7 = 25, the size estimates were fairly close to the nominal levels of significance for series generated from all other selected processes, except for AR(1), φ = 0.5, 0.7 and MA(1), θ = 0.5, 0.7, for which size was overestimated. For T = 300, with block sizes b = T^0.65 = 41 and b = T^0.7 = 54, the size estimates were fairly close to the nominal levels of significance for series generated from all selected processes except for AR(1), φ = 0.7 and MA(1), θ = 0.7. For these processes, size was overestimated. For both T = 100 and 300 with block sizes b = T^k, 0.7 < k < 0.8, the size of the test was considerably overestimated in all cases. Some of the size estimates are given in Tables 1 and 2.

<Tables 1 and 2 >

Power estimates for both T = 100 and 300 were obtained for block sizes b = T^0.65 and T^0.7, which had produced reasonably good size estimates. The power estimates for b = T^0.65 are given in Tables 3 and 4. For T = 100, it can be seen that the test has fairly good power as d approaches 0.45. However, as d approaches −0.45, the increase in power is not as good, especially for series generated from the AR(1), φ = 0.5 and the ARMA processes. Similar observations were made for T = 100 with b = T^0.7.

For T = 300 with b = T^0.65, it can be seen that the test has fairly good power as d approaches both 0.45 and −0.45, except when the series are generated from the ARMA process with d approaching −0.45. Similar observations were made for T = 300 with b = T^0.7.

<Tables 3 and 4>

Overall then, it appears that our test performs reasonably well for block sizes b = T^0.65 and T^0.7. However, as the parameter values of the autoregressive and moving average processes from which the series are generated tend to the boundary value of 1 (that is, as the series approach non-stationarity), the test tends to become invalid, although this happens later for T = 300 than for T = 100. As expected, the power of the test improves with increasing series length. It would appear from this simulation study that the optimal block size for the purposes of testing for d is in the interval [T^0.65, T^0.7].

To compare this test for d with the asymptotic test for d based on the smoothed spectral regression estimates, as considered by Reisen (1994), series were generated from some of the processes mentioned above for g = T^0.5 and for truncation points T^0.7 and T^0.9. Reisen (1994) showed that for truncation point T^0.9 and g = T^0.5, this test has fairly good power. However, our results in Table 5 clearly show that this test is not valid, since its size is considerably overestimated. Since our test is generally valid for certain block sizes, and since it has reasonably good power for these block sizes, it would appear to be more reliable than this asymptotic test.

<Table 5>

5 Concluding Remarks

Overall this method of testing for d using the moving blocks bootstrap appears to produce

fairly good results for certain block sizes. That is, in most cases, for these block sizes, the test

is generally valid with reasonably good power. We believe that this test has an advantage over

the other tests in the literature because it is free of any of the restrictions that are imposed on

these other tests, and it can adequately differentiate between ARFIMA (p,d,q) and

ARFIMA(p,0,q) models.

Acknowledgments

This work was supported by a grant from the Monash University Research Fund. Thanks to

Mizanur Laskar for the programming assistance and to John Nankervis for the useful

discussions on ARFIMA models and the bootstrap.

References

Agiakloglou, C., Newbold, P. and Wohar, M. (1993) Bias in an estimator of the fractional

difference parameter. Journal of Time Series Analysis 14, 3, 235-246.

Agiakloglou, C. and Newbold, P. (1994) Lagrange multiplier tests for fractional difference.

Journal of Time Series Analysis 15, 3, 251–262.

Babu, G. J. (1986) Bootstrapping statistics with linear combinations of chi-square as a weak

limit. Sankhya A, 56, 85-93.

Beran, R. (1988) Prepivoting test statistics: A bootstrap view and asymptotic refinements.

Journal of the American Statistical Association 83, 687-697.

Bickel, P. J. and Freedman, D. A. (1981) Some asymptotic theory for the bootstrap. Annals

of Statistics, 9, 1196-1217.

Box, G. E. P. and Jenkins, G. M. (1970) Time series analysis, forecasting and control. San

Francisco, CA: Holden-Day.

Chen, G., Abraham, B. and Peiris, S. (1993) Lag window estimation of the degree of

differencing in fractionally integrated time series model. Journal of Time Series

Analysis 15, 473-487.

Cheung, Y. and Diebold, F. X. (1994) On maximum likelihood estimation of the differencing

parameter of fractionally-integrated noise with unknown mean. Journal of

Econometrics 62, 301-316.

Davies, R. B. and Harte, D. S. (1987) Tests for Hurst effect. Biometrika 74, 95-101.

Efron, B. (1979) Bootstrap methods: Another look at the jackknife. Annals of Statistics 7,

1-26.

Fisher, N. I. and Hall, P. (1990) On bootstrap hypothesis testing. Australian Journal of Statistics 32, 177-190.

Geweke, J. and Porter-Hudak, S. (1983) The estimation and application of long memory time

series models. Journal of Time Series Analysis 4, 221-238.

Granger, C. W. J. (1978) New classes of time series models. Statistician 27, 237-253.

Granger, C. W. J. and Joyeux, R. (1980) An introduction to long memory time series models

and fractional differencing. Journal of Time Series Analysis 1, 15-39.

Hall, P., Horowitz, J. L. and Jing, B. (1995) On blocking rules for the bootstrap with

dependent data. Biometrika 82, 561-574.

Hall, P. and Wilson, S. R. (1991) Two guidelines for bootstrap hypothesis testing.

Biometrics 47, 757-762.

Hassler, U. (1993) Regression of spectral estimators with fractionally integrated time series.

Journal of Time Series Analysis 14, 369-379.

Hinkley, D. V. (1988) Bootstrap methods. Journal of the Royal Statistical Society, Series B 50, 321-337.

Hosking, J. R. M. (1981) Fractional differencing. Biometrika 68, 165-76.

Hosking, J. R. M. (1984) Modelling persistence in hydrological time series using fractional

differencing. Water Resources Research 20, 1898-1908.

Hurst, H. E. (1951) Long-term storage capacity of reservoirs. Transactions of the American

Society of Civil Engineers 116, 770-799.

Kashyap, R. L. and Eom, K. B. (1988) Estimation in long-memory time series models.

Journal of Time Series Analysis 9, 35-41.

Kunsch, H. R. (1989) The jackknife and bootstrap for general stationary observations. Annals

of Statistics 17, 1217-1241.

Kwiatkowski, D., Phillips, P. C. B., Schmidt, P. and Shin, Y. (1992), Testing the null

hypothesis of stationarity against the alternative of a unit root: How sure are we that

economic time series have a unit root? Journal of Econometrics 54, 159-178.

Liu, R. Y. and Singh, K. (1988) Moving blocks jackknife and bootstrap capture weak dependence. In Exploring the Limits of Bootstrap (eds R. LePage and L. Billard). John Wiley and Sons.

Mandelbrot, B. B. (1971) A fast fractional Gaussian noise generator. Water Resources Research 7, 543-553.

McLeod, A. I. and Hipel, K. W. (1978) Preservation of the rescaled adjusted range 1: A

reassessment of the Hurst phenomenon. Water Resources Research 14, 491-508.

Politis, D. N., Romano, J. P. and Wolf, M. (1997) Subsampling from heteroskedastic time

series. Journal of Econometrics 81, 281-317.

Reisen, V. A. (1994) Estimation of the fractional differencing parameter in the ARIMA(p,d,q)

model using the smoothed periodogram. Journal of Time Series Analysis 15, 335-350.

Romano, J. (1988) A bootstrap revival and some nonparametric distance tests. Journal of the

American Statistical Association 83, 698-708.

Singh, K. (1981) On the asymptotic accuracy of Efron’s bootstrap. Annals of Statistics 9,

1187-1195.

Sowell, F. (1992) Maximum likelihood estimation of stationary univariate fractionally

integrated time series models. Journal of Econometrics 53, 165-188.

Table 1 Size Estimates for H0: d = 0, H1: d ≠ 0, for T = 100

Block size       Significance   AR(1), φ =                     MA(1), θ =             ARMA(1,1),
k      T^k       Level          0      0.1    0.3    0.5       0.1    0.3    0.5      φ = -0.6, θ = 0.3
0.60 16 10% 0.073 0.075 0.083 0.138 0.073 0.073 0.130 0.049
5% 0.025 0.025 0.032 0.069 0.025 0.031 0.053 0.013
1% 0.003 0.003 0.004 0.011 0.002 0.001 0.007 0.000
0.65 20 10% 0.122 0.124 0.132 0.190 0.120 0.127 0.191 0.095
5% 0.060 0.062 0.067 0.114 0.062 0.068 0.109 0.042
1% 0.012 0.011 0.014 0.030 0.013 0.150 0.026 0.005
0.70 25 10% 0.136 0.141 0.144 0.201 0.139 0.143 0.226 0.118
5% 0.079 0.080 0.080 0.136 0.084 0.088 0.132 0.052
1% 0.019 0.022 0.027 0.049 0.020 0.020 0.039 0.006

Table 2 Size Estimates for H0: d = 0, H1: d ≠ 0, for T = 300

Block size       Significance   AR(1), φ =                     MA(1), θ =             ARMA(1,1),
k      T^k       Level          0      0.1    0.3    0.5       0.1    0.3    0.5      φ = -0.6, θ = 0.3
0.60 31 10% 0.063 0.067 0.071 0.084 0.063 0.060 0.067 0.035
5% 0.022 0.022 0.022 0.038 0.020 0.019 0.025 0.007
1% 0.003 0.003 0.003 0.004 0.002 0.002 0.003 0.000
0.65 41 10% 0.098 0.095 0.096 0.111 0.097 0.095 0.101 0.065
5% 0.045 0.043 0.042 0.053 0.043 0.045 0.050 0.024
1% 0.004 0.003 0.005 0.004 0.004 0.005 0.003 0.000
0.70 54 10% 0.119 0.118 0.117 0.132 0.123 0.119 0.132 0.099
5% 0.062 0.065 0.072 0.070 0.064 0.062 0.078 0.046
1% 0.020 0.021 0.020 0.022 0.019 0.019 0.024 0.013

Table 3 Power Estimates for H0: d = 0, H1: d ≠ 0, for T = 100, Block size T^0.65 = 20

       Significance   AR(1), φ =                     MA(1), θ =             ARMA(1,1),
d      Level          0      0.1    0.3    0.5       0.1    0.3    0.5      φ = -0.6, θ = 0.3
-0.45 10% 0.706 0.692 0.609 0.434 0.720 0.773 0.861 0.539
-0.30 0.432 0.422 0.354 0.233 0.454 0.519 0.689 0.396
-0.15 0.224 0.210 0.175 0.124 0.237 0.277 0.412 0.228
0.00 0.122 0.124 0.132 0.190 0.120 0.127 0.191 0.095
0.15 0.255 0.268 0.320 0.458 0.244 0.195 0.145 0.147
0.30 0.603 0.621 0.672 0.783 0.589 0.520 0.392 0.462
0.45 0.884 0.892 0.915 0.947 0.877 0.849 0.791 0.820
-0.45 5% 0.515 0.497 0.438 0.306 0.531 0.579 0.682 0.268
-0.30 0.302 0.289 0.239 0.146 0.311 0.363 0.480 0.188
-0.15 0.134 0.130 0.099 0.058 0.142 0.173 0.273 0.100
0.00 0.060 0.062 0.067 0.114 0.062 0.068 0.109 0.042
0.15 0.175 0.184 0.222 0.342 0.162 0.131 0.073 0.076
0.30 0.478 0.497 0.557 0.672 0.464 0.414 0.290 0.352
0.45 0.823 0.831 0.862 0.908 0.815 0.785 0.683 0.751
-0.45 1% 0.227 0.223 0.188 0.110 0.224 0.218 0.243 0.020
-0.30 0.094 0.088 0.078 0.041 0.098 0.105 0.157 0.013
-0.15 0.038 0.036 0.027 0.011 0.038 0.047 0.068 0.009
0.00 0.012 0.011 0.014 0.030 0.013 0.150 0.026 0.005
0.15 0.066 0.073 0.100 0.160 0.059 0.039 0.013 0.015
0.30 0.298 0.316 0.355 0.435 0.280 0.243 0.161 0.195
0.45 0.662 0.678 0.707 0.772 0.652 0.603 0.522 0.567

Table 4 Power Estimates for H0: d = 0, H1: d ≠ 0, for T = 300, Block size T^0.65 = 41

       Significance   AR(1), φ =                     MA(1), θ =             ARMA(1,1),
d      Level          0      0.1    0.3    0.5       0.1    0.3    0.5      φ = -0.6, θ = 0.3
-0.45 10% 0.914 0.914 0.908 0.871 0.914 0.911 0.903 0.557
-0.30 0.684 0.681 0.667 0.560 0.691 0.697 0.735 0.456
-0.15 0.306 0.300 0.279 0.206 0.312 0.336 0.390 0.235
0.00 0.098 0.095 0.096 0.111 0.097 0.095 0.101 0.065
0.15 0.354 0.360 0.394 0.461 0.343 0.312 0.230 0.262
0.30 0.818 0.824 0.846 0.884 0.814 0.793 0.744 0.777
0.45 0.966 0.967 0.969 0.980 0.965 0.965 0.952 0.959
-0.45 5% 0.805 0.810 0.808 0.749 0.795 0.776 0.697 0.209
-0.30 0.527 0.522 0.492 0.408 0.520 0.511 0.494 0.194
-0.15 0.193 0.190 0.169 0.120 0.193 0.201 0.227 0.104
0.00 0.045 0.043 0.042 0.053 0.043 0.045 0.050 0.024
0.15 0.233 0.241 0.278 0.355 0.225 0.197 0.135 0.162
0.30 0.731 0.737 0.758 0.809 0.725 0.694 0.635 0.678
0.45 0.947 0.949 0.951 0.963 0.944 0.939 0.921 0.931
-0.45 1% 0.422 0.454 0.455 0.403 0.399 0.294 0.168 0.008
-0.30 0.194 0.206 0.196 0.148 0.185 0.158 0.112 0.010
-0.15 0.048 0.045 0.040 0.023 0.047 0.046 0.040 0.002
0.00 0.004 0.003 0.005 0.004 0.004 0.005 0.003 0.000
0.15 0.075 0.076 0.091 0.137 0.071 0.057 0.029 0.041
0.30 0.490 0.499 0.529 0.585 0.478 0.439 0.367 0.407
0.45 0.863 0.863 0.879 0.900 0.861 0.846 0.814 0.836

Table 5 Size Estimates for H0: d = 0, H1: d ≠ 0, for the asymptotic test (truncation point T^m)

                 Significance   AR(1), φ =                     MA(1), θ =             ARMA(1,1),
T      m         Level          0      0.1    0.3    0.5       0.1    0.3    0.5      φ = -0.6, θ = 0.3
100 0.7 10% 0.363 0.365 0.396 0.471 0.359 0.357 0.463 0.368
5% 0.278 0.276 0.290 0.374 0.281 0.287 0.384 0.292
1% 0.163 0.163 0.169 0.240 0.163 0.177 0.238 0.186
0.9 10% 0.279 0.272 0.275 0.347 0.270 0.270 0.320 0.266
5% 0.188 0.186 0.209 0.258 0.190 0.200 0.241 0.195
1% 0.087 0.087 0.092 0.142 0.088 0.094 0.127 0.097
300 0.7 10% 0.487 0.490 0.485 0.517 0.491 0.481 0.503 0.487
5% 0.400 0.404 0.411 0.431 0.404 0.414 0.422 0.416
1% 0.272 0.267 0.267 0.297 0.276 0.271 0.300 0.273
0.9 10% 0.316 0.322 0.306 0.338 0.330 0.316 0.333 0.316
5% 0.219 0.249 0.241 0.244 0.236 0.234 0.242 0.232
1% 0.121 0.121 0.112 0.124 0.123 0.109 0.125 0.114
