ARFIMA Model Difference Test

Department of Econometrics
Monash University
Australia

September 1999
Abstract
In this paper we construct a test for the difference parameter d in the fractionally integrated autoregressive moving average (ARFIMA) model. Based on the smoothed periodogram regression estimation method, we use the moving blocks bootstrap method to construct the test for d. The results of Monte Carlo studies show that this test is generally valid for certain block sizes, and for these block sizes the test has reasonably good power.
Keywords: …, block size.
1 Introduction
During the last two decades there has been considerable interest in the application of long
memory time series processes in many fields, such as economics, hydrology and geology.
Hurst (1951), Mandelbrot (1971) and McLeod and Hipel (1978), among others, observed the
long memory property in time series while working with different data sets. They observed
the persistence of the autocorrelation function, that is, the autocorrelations take far longer
to decay than those of the associated ARMA class. Granger (1978) was one of the first
authors to consider fractional differencing in time series analysis.
Based on studies by McLeod and Hipel (1978), which are related to the original
studies of Hurst (1951), a simple method for estimating d based on the re-scaled adjusted range,
known as the Hurst coefficient method, was considered by various authors. However,
Hosking (1984) concluded that this estimator of d cannot be recommended because the
estimator of the equivalent Hurst coefficient is biased for some values of the coefficient.
… autoregressive process and estimated the difference parameter d by comparing variances for
each different choice of d. Geweke and Porter-Hudak (1983) and Kashyap and Eom (1988)
used a regression procedure for the logarithm of the periodogram to estimate d. Hassler
(1993) suggested an estimator of d based on the regression procedure for the logarithm of the
smoothed periodogram. Chen et al. (1993) and Reisen (1994) also considered estimators of d
using the smoothed periodogram. Simulation results of these authors show that the smoothed
periodogram regression estimator performs better than the periodogram regression estimator;
this is because, although the periodogram is asymptotically unbiased, it is not a consistent
estimator of the spectral density function, whereas the smoothed periodogram is.
The maximum likelihood estimation method of d is another popular method, where all
other parameters present in the model are estimated together with d (see, for example, Cheung and Diebold, 1994).
Hypothesis tests for d have been considered by various authors. Davies and Harte
(1987) constructed tests for the Hurst coefficient (which is a function of the difference parameter
d) based on the beta-optimal principle, local optimality and the re-scaled range test. However,
these tests were restricted to testing for fractional white noise and fractional first-order
autoregressive processes. Kwiatkowski et al. (1992) also considered tests for fractional white
noise, but these tests were based on the unit root approach.
Agiakloglou et al. (1993) have shown that if either the autoregressive or moving
average operator in the ARFIMA model contains large roots, the periodogram regression
estimator of d will be biased and the asymptotic test based on it will not produce good results.
Agiakloglou and Newbold (1994) developed Lagrange multiplier tests for the general
ARFIMA model. However, before testing for d, the order of the ARMA model fitted to the series
must be known. This can pose a problem, since the order of the fitted model is influenced by
the value of d.
Hassler (1993) considered tests based on the asymptotic results of Kashyap and Eom
(1988) for the periodogram regression estimator of d. However, he showed that this test is only
valid if the series are generated from a fractional white noise process. He also considered
tests based on the empirical distribution of the smoothed periodogram regression estimator of
d, and concluded that this test is superior to the test based on the periodogram regression estimator.
Reisen (1994) considered tests based on the asymptotic results of Geweke and
Porter-Hudak (1983) for the periodogram regression estimator of d, and concluded from simulation
results that smoothed periodogram regression may be superior to periodogram regression for
estimating d.
The small sample distribution of the estimators of d is unknown for any of these estimation
methods. In this paper we develop a non-parametric test based on the moving blocks bootstrap
(MBB) method, using the smoothed spectral regression estimator of d. The test can be applied
to both small and large samples and does not depend on the distribution of the estimator of d.
In Section 2 we outline the periodogram and smoothed periodogram regression methods.
We describe the moving blocks bootstrap test in Section 3, while in Section 4 we outline the
experimental design and discuss the results of the simulation studies.
2 Regression Estimators of d
Using the notation of Box and Jenkins (1970), the autoregressive integrated moving average
ARIMA(p,d,q) model for a series {X_t} can be written as

    \phi(B)(1 - B)^d X_t = \theta(B) Z_t ,                                    (2.1)

where B is the back-shift operator, Z_t is a white noise process with mean zero and variance σ²,
and \phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p and \theta(B) = 1 - \theta_1 B - \cdots - \theta_q B^q are stationary autoregressive
and invertible moving average operators of order p and q respectively. Granger and Joyeux
(1980) and Hosking (1981) extended the model (2.1) by allowing d to take fractional values
in the range (-0.5, 0.5). They expanded (1 - B)^d using the binomial expansion
    (1 - B)^d = \sum_{k=0}^{\infty} \binom{d}{k} (-B)^k .                     (2.2)
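As a simple numerical illustration of (2.2) (not part of the original paper, which carried out all computations in Gauss), the following Python sketch generates the coefficients π_k of (1 − B)^d = Σ_k π_k B^k using the recursion π_0 = 1, π_k = π_{k−1}(k − 1 − d)/k, which is equivalent to the binomial expansion; the function name is ours.

```python
import numpy as np

def frac_diff_weights(d, n_terms):
    """Coefficients pi_k of (1 - B)^d = sum_k pi_k B^k, truncated after n_terms.

    Uses pi_0 = 1 and pi_k = pi_{k-1} * (k - 1 - d) / k, which reproduces
    the binomial expansion in (2.2).
    """
    pi = np.empty(n_terms)
    pi[0] = 1.0
    for k in range(1, n_terms):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return pi

# For 0 < d < 0.5 the weights decay hyperbolically rather than geometrically,
# which is the source of the long memory property.
print(frac_diff_weights(0.3, 6))  # approx. [1, -0.3, -0.105, -0.0595, -0.0402, -0.0297]
```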
Geweke and Porter-Hudak (1983) obtained the periodogram estimator of d as follows. The
spectral density function of the ARFIMA(p,d,q) process can be written as

    f(\omega) = f_u(\omega)\,\{2\sin(\omega/2)\}^{-2d} ,                      (2.3)

where f_u(ω) is the spectral density of the ARMA(p,q) component u_t = \phi(B)^{-1}\theta(B) Z_t, so that

    f(\omega) = \frac{\sigma^2 |\theta(e^{-i\omega})|^2}{2\pi |\phi(e^{-i\omega})|^2}\,\{2\sin(\omega/2)\}^{-2d} , \qquad \omega \in (-\pi, \pi) .
As ω → 0, lim{ω^{2d} f(ω)} exists and is finite. Given a sample of size T, and given that ω_j =
2πj/T (j = 1, 2, . . . , T/2) is the set of harmonic frequencies, taking the logarithm of (2.3) gives

    \ln\{f(\omega_j)\} = \ln\{f_u(\omega_j)\} - d \ln\{2\sin(\omega_j/2)\}^2 .

Adding and subtracting \ln\{f_u(0)\} yields

    \ln\{f(\omega_j)\} = \ln\{f_u(0)\} - d \ln\{2\sin(\omega_j/2)\}^2 + \ln\{f_u(\omega_j)/f_u(0)\} .      (2.4)
The periodogram of the series is

    I_x(\omega) = \frac{1}{2\pi}\Big\{ R(0) + 2 \sum_{k=1}^{T-1} R(k)\cos(k\omega) \Big\} , \qquad \omega \in (-\pi, \pi) ,      (2.5)

where R(k) denotes the sample autocovariance at lag k.
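To make (2.5) concrete, the short sketch below (our own illustration; the function name is hypothetical) evaluates the periodogram from the sample autocovariances. At the harmonic frequencies this agrees with the usual FFT-based computation.

```python
import numpy as np

def periodogram_from_acov(x, omega):
    """Periodogram (2.5) at frequency omega, computed from sample autocovariances."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    # R(k) = (1/T) * sum_t (x_t - xbar)(x_{t+k} - xbar), k = 0, ..., T-1
    R = np.array([np.sum(xc[:T - k] * xc[k:]) / T for k in range(T)])
    k = np.arange(1, T)
    return (R[0] + 2.0 * np.sum(R[1:] * np.cos(k * omega))) / (2.0 * np.pi)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
print(periodogram_from_acov(x, 2 * np.pi * 1 / 64))   # I_x(w_1)
```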
Adding \ln\{I(\omega_j)/f(\omega_j)\} to both sides of (2.4) gives

    \ln\{I(\omega_j)\} = \ln\{f_u(0)\} - d \ln\{2\sin(\omega_j/2)\}^2 + \ln\{f_u(\omega_j)/f_u(0)\} + \ln\{I(\omega_j)/f(\omega_j)\} .      (2.6)
If the upper limit of j, say g, is chosen so that g/T → 0, and if ω_j is close to zero, then the term
ln{f_u(ω_j)/f_u(0)} becomes negligible. Equation (2.6) can then be written as a simple
regression equation

    y_j = a + b x_j + e_j ,   j = 1, 2, . . . , g ,                           (2.7)

where y_j = ln{I(ω_j)}, x_j = ln{2 sin(ω_j/2)}², a = ln{f_u(0)} − c, b = −d and
e_j = ln{I(ω_j)/f(ω_j)} + c, with c = E[−ln{I(ω_j)/f(ω_j)}]. The estimator of d is then −b̂, where b̂ is
the least squares estimate of b,

    \hat{b} = \frac{\sum_{j=1}^{g} (x_j - \bar{x})\, y_j}{\sum_{j=1}^{g} (x_j - \bar{x})^2} .
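The periodogram regression estimator just described can be sketched in a few lines of Python (an illustration under our own naming, not the authors' Gauss code; the periodogram is computed by FFT, which is equivalent to (2.5), and the number of frequencies g = T^0.5 anticipates the choice used in Section 4).

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Log-periodogram (GPH) regression estimate of d, using g = T**power frequencies."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    g = int(round(T ** power))
    j = np.arange(1, g + 1)
    w = 2.0 * np.pi * j / T                             # harmonic frequencies w_j
    dft = np.fft.fft(x - x.mean())
    I = np.abs(dft[1:g + 1]) ** 2 / (2.0 * np.pi * T)   # periodogram I(w_j)
    y = np.log(I)                                       # y_j = ln I(w_j)
    xr = np.log(4.0 * np.sin(w / 2.0) ** 2)             # x_j = ln{2 sin(w_j/2)}^2
    b = np.sum((xr - xr.mean()) * y) / np.sum((xr - xr.mean()) ** 2)
    return -b                                           # d_hat = -b_hat

# For Gaussian white noise the estimates scatter around d = 0.
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(300)))
```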
Hassler (1993), Chen et al. (1993) and Reisen (1994) all independently suggested an estimator
of d based on the regression procedure for the logarithm of the smoothed periodogram. The
smoothed periodogram is given by

    \hat{f}_x(\omega) = \frac{1}{2\pi}\Big\{ R(0)\lambda_0 + 2 \sum_{k=1}^{m} \lambda_k R(k)\cos(k\omega) \Big\} , \qquad \omega \in (-\pi, \pi) ,      (2.8)
where {λ_k} is a set of weights called the lag window, and m < T is called the truncation
point. The smoothed periodogram estimates are consistent if m is chosen so that, as T → ∞ and
m → ∞, m/T → 0. While the periodogram estimates are asymptotically unbiased, they are not
consistent. Various lag windows can be chosen to smooth the periodogram. The Parzen
window is given by
    \lambda_k = 1 - 6(k/m)^2 + 6(k/m)^3 ,        0 \le k \le m/2 ,
              = 2(1 - k/m)^3 ,                   m/2 < k \le m ,
and has the desirable property that it always produces positive estimates of the spectral
density function.
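The Parzen weights translate directly into code; the sketch below (illustrative only, with our own function name) returns λ_0, . . . , λ_m for a given truncation point m.

```python
import numpy as np

def parzen_weights(m):
    """Parzen lag-window weights lambda_k, k = 0, 1, ..., m."""
    u = np.arange(m + 1) / m
    return np.where(u <= 0.5,
                    1.0 - 6.0 * u ** 2 + 6.0 * u ** 3,   # 0 <= k <= m/2
                    2.0 * (1.0 - u) ** 3)                # m/2 < k <= m

print(parzen_weights(4))   # [1.0, 0.71875, 0.25, 0.03125, 0.0]
```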
Replacing the periodogram I(ω_j) in (2.6) by the smoothed periodogram \hat{f}(ω_j) gives

    \ln\{\hat{f}(\omega_j)\} = \ln\{f_u(0)\} - d \ln\{2\sin(\omega_j/2)\}^2 + \ln\{f_u(\omega_j)/f_u(0)\} + \ln\{\hat{f}(\omega_j)/f(\omega_j)\} .      (2.9)
The smoothed periodogram regression estimator of d is then obtained from the least squares
slope of the regression of y_j = ln{\hat{f}(ω_j)} on x_j, that is,

    \hat{d} = -\hat{b} = -\frac{\sum_{j=1}^{g} (x_j - \bar{x})\, y_j}{\sum_{j=1}^{g} (x_j - \bar{x})^2} .      (2.10)
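For completeness, an illustrative sketch of the smoothed periodogram regression estimate defined by (2.8)–(2.10) is given below (Python, with the Parzen window built in; the function name and the sign convention d̂ = −b̂ follow the reconstruction above and should be read as a sketch rather than the authors' implementation).

```python
import numpy as np

def smoothed_periodogram_d(x, m, g):
    """Smoothed periodogram (Parzen window) regression estimate of d, as in (2.8)-(2.10).

    m is the truncation point of the lag window, g the number of harmonic
    frequencies used in the regression.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    # sample autocovariances R(0), ..., R(m)
    R = np.array([np.sum(xc[:T - k] * xc[k:]) / T for k in range(m + 1)])
    # Parzen lag-window weights (keep the spectral estimate non-negative)
    u = np.arange(m + 1) / m
    lam = np.where(u <= 0.5, 1 - 6 * u ** 2 + 6 * u ** 3, 2 * (1 - u) ** 3)
    j = np.arange(1, g + 1)
    w = 2 * np.pi * j / T                          # harmonic frequencies w_j
    k = np.arange(1, m + 1)
    # smoothed spectral estimate (2.8) at the harmonic frequencies
    f_hat = (R[0] + 2 * np.sum(lam[1:, None] * R[1:, None] *
                               np.cos(np.outer(k, w)), axis=0)) / (2 * np.pi)
    y = np.log(f_hat)                              # y_j = ln f_hat(w_j)
    xr = np.log(4 * np.sin(w / 2) ** 2)            # x_j = ln{2 sin(w_j/2)}^2
    b = np.sum((xr - xr.mean()) * y) / np.sum((xr - xr.mean()) ** 2)
    return -b                                      # d_hat as in (2.10)

rng = np.random.default_rng(1)
series = rng.standard_normal(300)
print(smoothed_periodogram_d(series, m=54, g=17))  # m = T^0.7, g = T^0.5 for T = 300
```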
Geweke and Porter-Hudak (1983) showed that the periodogram regression estimator of d is
asymptotically normally distributed with variance

    \frac{\pi^2}{6 \sum_{j=1}^{g} (x_j - \bar{x})^2} ,
while Kashyap and Eom (1988) showed that the periodogram regression estimator of d is
consistent. For the smoothed periodogram regression estimator, in particular for a Parzen
window, Reisen (1994) showed that the variance of this estimator is approximately

    \frac{0.53928\, m}{T \sum_{j=1}^{g} (x_j - \bar{x})^2} .
3 The Moving Blocks Bootstrap Test

3.1 Methodology
Efron (1979) initiated the standard bootstrap and jackknife procedures based on independent
and identically distributed observations. This set-up gives a better approximation to the
distribution of statistics compared to the classical large sample approximations (Bickel and
Freedman 1981, Singh 1981, Babu 1986). It was pointed out by Singh (1981) that when the
observations are not independent, as is the case for time series data, the bootstrap
approximation may not be good. In such a situation the ordinary bootstrap procedure fails to
capture the dependence structure of the data.
A general bootstrap procedure for weakly stationary data, free of specific modelling,
has been formulated by Kunsch (1989). A similar procedure has been suggested
independently by Liu and Singh (1988). This procedure, referred to as the moving blocks
bootstrap method, does not require one to first fit a parametric or semi-parametric model to
the dependent data. The procedure works for arbitrary stationary processes with short range
dependence. Moving blocks bootstrap samples are drawn from a stationary dependent data set
as follows: Let X1, X2, . . . , XT be a sequence of stationary dependent random variables with
common distribution function for each Xi. Let B1, B2, . . . , BT-b+1 be the moving blocks where
b is the size of each block. Bj stands for the jth block consisting of b consecutive observations,
that is, Bj = {Xj, Xj+1, . . . , Xj+b-1}. A set of k independent block samples is drawn with replacement
from B1, B2, . . . , BT-b+1. All observations of the k-sampled blocks are then pasted together in
succession to form a bootstrap sample. The number of blocks k is chosen so that T ≈ kb. With
the moving blocks bootstrap, the idea is to choose a block size b large enough so that the
observations more than b units apart are nearly independent. We simply cannot resample from
the individual observations because this would destroy the correlation that we are trying to
capture. However by sampling the blocks of length b, the correlation present in observations
less than b units apart is retained. The choice of block size can be quite important. Hall et al.
(1995) addressed this issue. They pointed out that the optimal block size depends on the context
in which the block bootstrap is applied.
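A minimal sketch of the resampling step just described is given below (Python; trimming the pasted series back to length T when kb > T is our own simplifying choice, since the text only requires T ≈ kb).

```python
import numpy as np

def moving_blocks_sample(x, b, rng):
    """Draw one moving blocks bootstrap sample of length T from the series x."""
    x = np.asarray(x)
    T = len(x)
    k = int(np.ceil(T / b))                        # number of blocks, so that T ~ k*b
    starts = rng.integers(0, T - b + 1, size=k)    # sample block starting points with replacement
    sample = np.concatenate([x[s:s + b] for s in starts])
    return sample[:T]                              # paste blocks together and trim to length T

rng = np.random.default_rng(0)
print(moving_blocks_sample(np.arange(20), b=5, rng=rng))
```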
Liu and Singh (1988) have shown that for a general statistic that is a smooth
functional, the moving blocks bootstrap procedure is consistent, assuming that b → ∞ at a rate
such that b/T → 0. Subsampling from dependent time series has also
been discussed by Politis et al. (1997). However, this procedure is far too computationally
intensive for our purposes.
It has been suggested by Hinkley (1988) and Romano (1988) that when bootstrap methods are
used in hypothesis testing, the bootstrap test statistic should be in the form of a metric.
Romano (1988) has shown that test statistics in the form of metrics yield valid tests against all
alternatives.
The estimator of d that we use to construct the test is
the smoothed periodogram regression estimate based on the Parzen window (Equation 2.10).
Even though the consistency results of Liu and Singh (1988) will not strictly apply here, since
the estimator of d is not a smooth functional, we will nevertheless obtain a moving blocks
bootstrap estimator of d and use it in the hypothesis testing procedure that follows.
The distribution of the resulting test statistic, W, is unknown under the null hypothesis. Hence, the null hypothesis can be tested by comparing the bootstrap-based p-value of W to a
nominal significance level α. Consider the sample x1, x2, . . . , xT. The bootstrap-based p-value
is obtained by the following steps: (1) Sample with replacement k times from the set {B1, B2,
. . . , BT-b+1}. This produces a set of blocks {B1*, B2*, . . . , Bk*}, which, when laid end-to-end, forms
a new time series, that is, the bootstrap sample of length T, x1*, x2*, . . . , xT*. (2) Using this
bootstrap sample, calculate the bootstrap estimate d̂*, obtained from Equation (2.10) with yj
computed from x1*, x2*, . . . , xT*, and hence the bootstrap test statistic W*, which is based on
d̂* − d̂. Steps (1) and (2) are then
repeated J times. The empirical distribution of the J values of W * is the bootstrap estimate of
the distribution of W. The bootstrap-based p-value p* is an estimate of the p-value that would
be associated with the test statistic W, and it is obtained as follows: p* = #(W* > W)/J. For a
test at nominal significance level α, the null hypothesis is rejected if p* < α.
Note that choosing d̂* − d̂ and not d̂* − d as the bootstrap test statistic has the
effect of increasing the power of the test (see Hall and Wilson (1991)). Beran (1988) and
Fisher and Hall (1990) suggest that a bootstrap test should be based on pivotal test statistics
because it improves the accuracy of the test. However since the standard deviation of d̂ *
cannot be obtained from the sample, a pivotal test is not possible in this case.
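The steps of the test can be summarised in a short sketch (again illustrative Python rather than the original Gauss program). The estimator of d is passed in as a function, which in the paper is the smoothed periodogram regression estimate (2.10); taking W = |d̂ − d0| and W* = |d̂* − d̂| as the metrics is our reading of the test statistic described above.

```python
import numpy as np

def mbb_test_pvalue(x, d0, estimate_d, b, J=500, rng=None):
    """Moving blocks bootstrap p-value for H0: d = d0 against H1: d != d0.

    estimate_d is any function mapping a series to an estimate of d,
    e.g. the smoothed periodogram regression estimate (2.10).
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    T = len(x)
    d_hat = estimate_d(x)
    W = abs(d_hat - d0)                            # test statistic (a metric)
    k = int(np.ceil(T / b))
    count = 0
    for _ in range(J):
        # step (1): resample k blocks with replacement and paste them together
        starts = rng.integers(0, T - b + 1, size=k)
        x_star = np.concatenate([x[s:s + b] for s in starts])[:T]
        # step (2): recompute the estimate on the bootstrap series
        W_star = abs(estimate_d(x_star) - d_hat)   # bootstrap analogue of W
        count += W_star > W
    return count / J                               # p* = #(W* > W)/J; reject H0 if p* < alpha

# Usage (illustrative): with some estimator `est`, e.g. the smoothed periodogram
# regression sketch given in Section 2,
#   p_star = mbb_test_pvalue(series, d0=0.0, estimate_d=est, b=20, J=500)
```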
4 Simulation Studies
Time series of lengths T = 100 and 300 were generated from ARFIMA(p,d,q) processes,
using the method suggested by Hosking (1981). The white noise process is assumed to be
normally distributed with mean 0 and standard deviation 1. With the parameter d ∈ [-0.45,
0.45] in increments of 0.15, the series were generated from ARFIMA(1,d,0) with φ = 0, 0.1,
0.3, 0.5, 0.7, ARFIMA(0,d,1) with θ = 0, 0.1, 0.3, 0.5, 0.7, and ARFIMA(1,d,1) with φ = -0.6,
θ = 0.3. These models were chosen so that both first and second order processes, as well as a
range of parameter values, would be considered. Block sizes of the order b = T^k, 0.3 < k < 0.8
in increments of 0.05, were considered. This range of k values was selected to ensure that the
blocks did not contain too few or too many observations.
Estimates of d were obtained from the smoothed periodogram regression (2.10),
using the Parzen window. The number of regression terms in Equation (2.10) was chosen to
be T^g, where g = 0.5, while the truncation point in the Parzen lag window was chosen to be
T^m, where m = 0.7. (Chen et al. (1993) and Reisen (1994) showed that these values of g and m
produced good estimates of d in their simulation studies.) Other values of g and m in the
range (0, 1) were also trialled, but were found to produce poor estimates of d.
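For concreteness, rounding T^k to the nearest integer reproduces the block sizes reported in the tables, and the same rule gives the implied number of regression terms and truncation point (a simple arithmetic check, not additional results):

```python
# g = T**0.5 regression terms, truncation point m = T**0.7, block sizes b = T**k.
for T in (100, 300):
    g = round(T ** 0.5)
    m = round(T ** 0.7)
    b = {k: round(T ** k) for k in (0.60, 0.65, 0.70)}
    print(T, g, m, b)
# T = 100: g = 10, m = 25, b = {0.6: 16, 0.65: 20, 0.7: 25}
# T = 300: g = 17, m = 54, b = {0.6: 31, 0.65: 41, 0.7: 54}
```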
A total of 500 moving blocks bootstrap replications were generated for each Monte
Carlo trial which was repeated 1000 times for each case. All programming was done in
Gauss.
4.2 Discussion
For both T = 100 and 300 with block sizes b = T^k, 0.3 < k < 0.6, the size of the test was
mostly underestimated. However, for T = 100 with block size b = T^0.6 = 16, for series
generated from the AR(1) process with φ = 0.5 and the MA(1) process with θ = 0.5, the size estimates were
fairly close to the nominal levels of significance. For block sizes b = T^0.65 = 20 and b = T^0.7 =
25, the size estimates were fairly close to the nominal levels of significance for series
generated from all other selected processes, except for AR(1) with φ = 0.5, 0.7 and MA(1) with θ =
0.5, 0.7, for which the size was overestimated. For T = 300, with block sizes b = T^0.65 = 41 and b =
T^0.7 = 54, the size estimates were fairly close to the nominal levels of significance for series
generated from all other selected processes except for AR(1) with φ = 0.7 and MA(1) with θ = 0.7.
For these processes, size was overestimated. For both T = 100 and 300 with block size
b = T^k, 0.7 < k < 0.8, the size of the test was considerably overestimated in all cases. Some of
these size estimates are given in Tables 1 and 2.
Power estimates for both T = 100 and 300 were obtained for block sizes b = T^0.65 and
T^0.7, which had produced reasonably good size estimates. The power estimates for b = T^0.65
are given in Tables 3 and 4. For T = 100, it can be seen that the test has fairly good power as d
approaches 0.45. However, as d approaches -0.45, the increase in power is not as good,
especially for series generated from the AR(1) process with φ = 0.5 and the ARMA process. Similar
observations were made for T = 100 with b = T^0.7. For T = 300 with b = T^0.65, it can be seen
that the test has fairly good power as d approaches both 0.45 and -0.45, except when the series
are generated from the ARMA process with d approaching -0.45. Similar observations were
made for T = 300 with b = T^0.7.
Overall then, it appears that our test performs reasonably well for block sizes b = T^0.65
and T^0.7. However, as the parameter values of the autoregressive and moving average
processes from which the series are generated tend to the boundary value of 1 (that is, as the
series approach non-stationarity), the test tends to become invalid. However, it takes longer
for this to happen for T = 300 than for T = 100. As expected, the power of the test improves
with increasing series length. It would appear from this simulation study that the optimal
block size for the purposes of testing for d is in the interval [T^0.65, T^0.7].
To compare this test for d with the asymptotic test for d, which is based on the
asymptotic distribution of the smoothed periodogram regression estimator, size estimates for the
asymptotic test were obtained for series generated from some of the processes mentioned above,
for g = 0.5 and for m = 0.7 and 0.9. Reisen (1994) showed that for m = 0.9 and g = 0.5, this
test has fairly good power. However, our results in Table 5 clearly show that this test is not
valid, since its size is considerably overestimated. Since our test is generally valid for certain
block sizes, and since it has reasonably good power for these block sizes, it would appear to
be more reliable than this asymptotic test.
<Table 5>
5 Concluding Remarks
Overall this method of testing for d using the moving blocks bootstrap appears to produce
fairly good results for certain block sizes. That is, in most cases, for these block sizes, the test
is generally valid with reasonably good power. We believe that this test has an advantage over
the other tests in the literature because it is free of any of the restrictions that are imposed on
these other tests, and it can adequately differentiate between ARFIMA (p,d,q) and
ARFIMA(p,0,q) models.
Acknowledgments
This work was supported by a grant from the Monash University Research Fund. Thanks to
Mizanur Laskar for the programming assistance and to John Nankervis for the useful comments.
References
Agiakloglou, C., Newbold, P. and Wohar, M. (1993) Bias in an estimator of the fractional difference parameter. Journal of Time Series Analysis 14, 235-246.
Agiakloglou, C. and Newbold, P. (1994) Lagrange multiplier tests for fractional difference. Journal of Time Series Analysis 15, 253-262.
Beran, R. (1988) Prepivoting test statistics: A bootstrap view and asymptotic refinements. Journal of the American Statistical Association 83, 687-697.
Bickel, P. J. and Freedman, D. A. (1981) Some asymptotic theory for the bootstrap. Annals of Statistics 9, 1196-1217.
Box, G. E. P. and Jenkins, G. M. (1970) Time series analysis, forecasting and control. San Francisco: Holden-Day.
Chen, G., Abraham, B. and Peiris, S. (1993) Lag window estimation of the degree of differencing in fractionally integrated time series models. Journal of Time Series Analysis 15, 473-487.
Davies, R. B. and Harte, D. S. (1987) Tests for Hurst effect. Biometrika 74, 95-101.
Efron, B. (1979) Bootstrap methods: Another look at the jackknife. Annals of Statistics 7,
1-26.
Fisher, N. I. and Hall, P. (1990) On bootstrap hypothesis testing. Australian Journal of Statistics 32, 177-190.
Geweke, J. and Porter-Hudak, S. (1983) The estimation and application of long memory time series models. Journal of Time Series Analysis 4, 221-238.
Granger, C. W. J. (1978) New classes of time series models. Statistician 27, 237-253.
Granger, C. W. J. and Joyeux, R. (1980) An introduction to long memory time series models and fractional differencing. Journal of Time Series Analysis 1, 15-29.
Hall, P., Horowitz, J. L. and Jing, B. (1995) On blocking rules for the bootstrap with dependent data. Biometrika 82, 561-574.
Hall, P. and Wilson, S. R. (1991) Two guidelines for bootstrap hypothesis testing. Biometrics 47, 757-762.
Hassler, U. (1993) Regression of spectral estimators with fractionally integrated time series. Journal of Time Series Analysis 14, 369-380.
Hinkley, D. V. (1988) Bootstrap methods. Journal of the Royal Statistical Society, Series B 50, 321-337.
Hosking, J. R. M. (1981) Fractional differencing. Biometrika 68, 165-176.
Kunsch, H. R. (1989) The jackknife and the bootstrap for general stationary observations. Annals of Statistics 17, 1217-1241.
Kwiatkowski, D., Phillips, P. C. B., Schmidt, P. and Shin, Y. (1992), Testing the null
hypothesis of stationarity against the alternative of a unit root: How sure are we that
economic time series have a unit root? Journal of Econometrics 54, 159-178.
Liu, R. Y. and K. Singh (1988) Exploring the Limits of the Bootstrap. (Editor: R. Lepage and
Research 7, 543-553.
Politis, D. N., Romano, J. P. and Wolf, M. (1997) Subsampling for heteroskedastic time series. Journal of Econometrics 81, 281-317.
Reisen, V. A. (1994) Estimation of the fractional difference parameter in the ARIMA(p,d,q) model using the smoothed periodogram. Journal of Time Series Analysis 15, 335-350.
Romano, J. (1988) A bootstrap revival of some nonparametric distance tests. Journal of the American Statistical Association 83, 698-708.
Singh, K. (1981) On the asymptotic accuracy of Efron's bootstrap. Annals of Statistics 9, 1187-1195.
Table 1  Size estimates for H0: d = 0 against H1: d ≠ 0, for T = 100
                                          AR(1), φ =                MA(1), θ =          ARMA(1,1), φ = -0.6, θ = 0.3
Block size k   T^k   Significance level   0      0.1    0.3    0.5  0.1    0.3    0.5
0.60 16 10% 0.073 0.075 0.083 0.138 0.073 0.073 0.130 0.049
5% 0.025 0.025 0.032 0.069 0.025 0.031 0.053 0.013
1% 0.003 0.003 0.004 0.011 0.002 0.001 0.007 0.000
0.65 20 10% 0.122 0.124 0.132 0.190 0.120 0.127 0.191 0.095
5% 0.060 0.062 0.067 0.114 0.062 0.068 0.109 0.042
1% 0.012 0.011 0.014 0.030 0.013 0.150 0.026 0.005
0.70 25 10% 0.136 0.141 0.144 0.201 0.139 0.143 0.226 0.118
5% 0.079 0.080 0.080 0.136 0.084 0.088 0.132 0.052
1% 0.019 0.022 0.027 0.049 0.020 0.020 0.039 0.006
Table 3  Power estimates for H0: d = 0 against H1: d ≠ 0, for T = 100, block size T^0.65 = 20
                             AR(1), φ =                MA(1), θ =          ARMA(1,1), φ = -0.6, θ = 0.3
d       Significance level   0      0.1    0.3    0.5  0.1    0.3    0.5
-0.45 10% 0.706 0.692 0.609 0.434 0.720 0.773 0.861 0.539
-0.30 0.432 0.422 0.354 0.233 0.454 0.519 0.689 0.396
-0.15 0.224 0.210 0.175 0.124 0.237 0.277 0.412 0.228
0.00 0.122 0.124 0.132 0.190 0.120 0.127 0.191 0.095
0.15 0.255 0.268 0.320 0.458 0.244 0.195 0.145 0.147
0.30 0.603 0.621 0.672 0.783 0.589 0.520 0.392 0.462
0.45 0.884 0.892 0.915 0.947 0.877 0.849 0.791 0.820
-0.45 5% 0.515 0.497 0.438 0.306 0.531 0.579 0.682 0.268
-0.30 0.302 0.289 0.239 0.146 0.311 0.363 0.480 0.188
-0.15 0.134 0.130 0.099 0.058 0.142 0.173 0.273 0.100
0.00 0.060 0.062 0.067 0.114 0.062 0.068 0.109 0.042
0.15 0.175 0.184 0.222 0.342 0.162 0.131 0.073 0.076
0.30 0.478 0.497 0.557 0.672 0.464 0.414 0.290 0.352
0.45 0.823 0.831 0.862 0.908 0.815 0.785 0.683 0.751
-0.45 1% 0.227 0.223 0.188 0.110 0.224 0.218 0.243 0.020
-0.30 0.094 0.088 0.078 0.041 0.098 0.105 0.157 0.013
-0.15 0.038 0.036 0.027 0.011 0.038 0.047 0.068 0.009
0.00 0.012 0.011 0.014 0.030 0.013 0.150 0.026 0.005
0.15 0.066 0.073 0.100 0.160 0.059 0.039 0.013 0.015
0.30 0.298 0.316 0.355 0.435 0.280 0.243 0.161 0.195
0.45 0.662 0.678 0.707 0.772 0.652 0.603 0.522 0.567
Table 4  Power estimates for H0: d = 0 against H1: d ≠ 0, for T = 300, block size T^0.65 = 41
                             AR(1), φ =                MA(1), θ =          ARMA(1,1), φ = -0.6, θ = 0.3
d       Significance level   0      0.1    0.3    0.5  0.1    0.3    0.5
-0.45 10% 0.914 0.914 0.908 0.871 0.914 0.911 0.903 0.557
-0.30 0.684 0.681 0.667 0.560 0.691 0.697 0.735 0.456
-0.15 0.306 0.300 0.279 0.206 0.312 0.336 0.390 0.235
0.00 0.098 0.095 0.096 0.111 0.097 0.095 0.101 0.065
0.15 0.354 0.360 0.394 0.461 0.343 0.312 0.230 0.262
0.30 0.818 0.824 0.846 0.884 0.814 0.793 0.744 0.777
0.45 0.966 0.967 0.969 0.980 0.965 0.965 0.952 0.959
-0.45 5% 0.805 0.810 0.808 0.749 0.795 0.776 0.697 0.209
-0.30 0.527 0.522 0.492 0.408 0.520 0.511 0.494 0.194
-0.15 0.193 0.190 0.169 0.120 0.193 0.201 0.227 0.104
0.00 0.045 0.043 0.042 0.053 0.043 0.045 0.050 0.024
0.15 0.233 0.241 0.278 0.355 0.225 0.197 0.135 0.162
0.30 0.731 0.737 0.758 0.809 0.725 0.694 0.635 0.678
0.45 0.947 0.949 0.951 0.963 0.944 0.939 0.921 0.931
-0.45 1% 0.422 0.454 0.455 0.403 0.399 0.294 0.168 0.008
-0.30 0.194 0.206 0.196 0.148 0.185 0.158 0.112 0.010
-0.15 0.048 0.045 0.040 0.023 0.047 0.046 0.040 0.002
0.00 0.004 0.003 0.005 0.004 0.004 0.005 0.003 0.000
0.15 0.075 0.076 0.091 0.137 0.071 0.057 0.029 0.041
0.30 0.490 0.499 0.529 0.585 0.478 0.439 0.367 0.407
0.45 0.863 0.863 0.879 0.900 0.861 0.846 0.814 0.836
Table 5  Size estimates for H0: d = 0 against H1: d ≠ 0, for the asymptotic test (truncation point T^m, g = 0.5)
                                 AR(1), φ =                MA(1), θ =          ARMA(1,1), φ = -0.6, θ = 0.3
T      m     Significance level  0      0.1    0.3    0.5  0.1    0.3    0.5
100    0.7   10%   0.363 0.365 0.396 0.471 0.359 0.357 0.463 0.368
5% 0.278 0.276 0.290 0.374 0.281 0.287 0.384 0.292
1% 0.163 0.163 0.169 0.240 0.163 0.177 0.238 0.186
       0.9   10%   0.279 0.272 0.275 0.347 0.270 0.270 0.320 0.266
5% 0.188 0.186 0.209 0.258 0.190 0.200 0.241 0.195
1% 0.087 0.087 0.092 0.142 0.088 0.094 0.127 0.097
300    0.7   10%   0.487 0.490 0.485 0.517 0.491 0.481 0.503 0.487
5% 0.400 0.404 0.411 0.431 0.404 0.414 0.422 0.416
1% 0.272 0.267 0.267 0.297 0.276 0.271 0.300 0.273
       0.9   10%   0.316 0.322 0.306 0.338 0.330 0.316 0.333 0.316
5% 0.219 0.249 0.241 0.244 0.236 0.234 0.242 0.232
1% 0.121 0.121 0.112 0.124 0.123 0.109 0.125 0.114