{{Short description|Statistical estimator converging in probability to a true parameter as sample size increases}}{{broader|Consistency (statistics)}}

[[Image:Consistency of estimator.svg|thumb|250px|{''T''<sub>1</sub>, ''T''<sub>2</sub>, ''T''<sub>3</sub>, ...} is a sequence of estimators for parameter ''θ''<sub>0</sub>, the true value of which is 4. This sequence is consistent: the estimators are getting more and more concentrated near the true value ''θ''<sub>0</sub>; at the same time, these estimators are biased. The limiting distribution of the sequence is a degenerate random variable which equals ''θ''<sub>0</sub> with probability 1.]]

In [[statistics]], a '''consistent estimator''' or '''asymptotically consistent estimator''' is an [[estimator]]—a rule for computing estimates of a parameter ''θ''<sub>0</sub>—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates [[convergence in probability|converges in probability]] to ''θ''<sub>0</sub>. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to ''θ''<sub>0</sub> converges to one.

In practice one constructs an estimator as a function of an available sample of [[sample size|size]] ''n'', and then imagines being able to keep collecting data and expanding the sample ''ad infinitum''. In this way one would obtain a sequence of estimates indexed by ''n'', and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to converge in probability to the true value ''θ''<sub>0</sub>, it is called a consistent estimator; otherwise the estimator is said to be '''inconsistent'''.

Consistency as defined here is sometimes referred to as '''weak consistency'''. When we replace convergence in probability with [[almost sure convergence]], then the estimator is said to be '''strongly consistent'''. Consistency is related to [[bias of an estimator|bias]]; see [[#Bias versus consistency|bias versus consistency]].

== Definition ==
Formally speaking, an [[estimator]] ''T<sub>n</sub>'' of parameter ''θ'' is said to be '''weakly consistent''' if it [[convergence in probability|'''converges in probability''']] to the true value of the parameter:{{sfn|Amemiya|1985|loc=Definition 3.4.2}}
: <math>
\underset{n\to\infty}{\operatorname{plim}}\;T_n = \theta,
</math>
i.e. if, for all ''ε'' > 0,
: <math>
\lim_{n\to\infty}\Pr\big(|T_n-\theta| > \varepsilon\big) = 0.
</math>

An [[estimator]] ''T<sub>n</sub>'' of parameter ''θ'' is said to be '''strongly consistent''' if it '''converges almost surely''' to the true value of the parameter:
: <math>
\Pr\big(\lim_{n\to\infty}T_n = \theta\big) = 1.
</math>
A more rigorous definition takes into account the fact that ''θ'' is actually unknown, and thus the convergence in probability must take place for every possible value of this parameter. Suppose {{nowrap|{''p<sub>θ</sub>'': ''θ'' ∈ Θ}}} is a family of distributions (the [[parametric model]]), and {{nowrap|1=''X<sup>θ</sup>'' = {''X''<sub>1</sub>, ''X''<sub>2</sub>, … : ''X<sub>i</sub>'' ~ ''p<sub>θ</sub>''}}} is an infinite [[statistical sample|sample]] from the distribution ''p<sub>θ</sub>''. Let { ''T<sub>n</sub>''(''X<sup>θ</sup>'') } be a sequence of estimators for some parameter ''g''(''θ''). Usually, ''T<sub>n</sub>'' will be based on the first ''n'' observations of a sample. Then this sequence {''T<sub>n</sub>''} is said to be (weakly) '''consistent''' if {{sfn|Lehman|Casella|1998|page=332}}
: <math>
\underset{n\to\infty}{\operatorname{plim}}\;T_n(X^{\theta}) = g(\theta),\ \ \text{for all}\ \theta\in\Theta.
</math>

This definition uses ''g''(''θ'') instead of simply ''θ'', because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example, we estimate the location parameter of the model, but not the scale:
== Examples ==

=== Sample mean of a normal random variable ===

Suppose one has a sequence of [[Independence (probability theory)|statistically independent]] observations {''X''<sub>1</sub>, ''X''<sub>2</sub>, ...} from a [[Normal distribution|normal ''N''(''μ'', ''σ''<sup>2</sup>)]] distribution. To estimate ''μ'' based on the first ''n'' observations, one can use the [[sample mean]]: ''T<sub>n</sub>'' = (''X''<sub>1</sub> + ... + ''X<sub>n</sub>'')/''n''. This defines a sequence of estimators, indexed by the sample size ''n''.

From the properties of the normal distribution, we know the [[sampling distribution]] of this statistic: ''T''<sub>''n''</sub> is itself normally distributed, with mean ''μ'' and variance ''σ''<sup>2</sup>/''n''. Equivalently, <math style="vertical-align:-.3em">\scriptstyle (T_n-\mu)/(\sigma/\sqrt{n})</math> has a standard normal distribution:
: <math>
\Pr\!\left[\,|T_n-\mu| \geq \varepsilon\,\right] = \Pr\!\left[\frac{\sqrt{n}\,|T_n-\mu|}{\sigma} \geq \frac{\sqrt{n}\,\varepsilon}{\sigma}\right] = 2\left(1-\Phi\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right) \to 0
</math>
as ''n'' tends to infinity, for any fixed {{nowrap|''ε'' > 0}}. Therefore, the sequence ''T<sub>n</sub>'' of sample means is consistent for the population mean ''μ'' (recalling that <math>\Phi</math> is the [[cumulative distribution function]] of the standard normal distribution).
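The concentration described above is easy to check numerically. The following sketch (assuming the NumPy library; the values of ''μ'', ''σ'', ''ε'' and the sample sizes are arbitrary illustrative choices) estimates Pr(|''T<sub>n</sub>'' − ''μ''| < ''ε'') by simulation and shows it approaching 1 as ''n'' grows, in line with the limit computed above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps = 4.0, 2.0, 0.1          # illustrative true parameters and tolerance
for n in (10, 100, 1000, 10000):
    # 1000 independent replications of the sample mean T_n for each sample size n
    T_n = rng.normal(mu, sigma, size=(1000, n)).mean(axis=1)
    # estimated probability that T_n lies within eps of the true mean
    print(n, np.mean(np.abs(T_n - mu) < eps))
</syntaxhighlight>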
== Establishing consistency ==
The notion of asymptotic consistency is very close, almost synonymous, to the notion of [[convergence in probability]]. As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove consistency. Many such tools exist:
* In order to demonstrate consistency directly from the definition, one can use the inequality {{sfn|Amemiya|1985|loc=equation (3.2.5)}}
:: <math>
\Pr\!\big[\,|T_n-\theta| \geq \varepsilon\,\big] \leq \frac{\operatorname{E}\big[h(T_n-\theta)\big]}{h(\varepsilon)},
</math>
the most common choice for the function ''h'' being either the absolute value (in which case it is known as the [[Markov inequality]]) or the quadratic function (respectively [[Chebyshev's inequality]]); a worked application to the sample mean is given after this list.
* Another useful result is the [[continuous mapping theorem]]: if ''T<sub>n</sub>'' is consistent for ''θ'' and ''g''(·) is a real-valued function continuous at point ''θ'', then ''g''(''T<sub>n</sub>'') will be consistent for ''g''(''θ''):{{sfn|Amemiya|1985|loc=Theorem 3.2.6}}
:: <math>
g(T_n)\ \xrightarrow{p}\ g(\theta).
</math>
* [[Slutsky's theorem]] can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If ''T<sub>n</sub>'' →<sup style="position:relative;top:-.2em;left:-1em;">''d''</sup>''α'', and ''S<sub>n</sub>'' →<sup style="position:relative;top:-.2em;left:-1em;">''p''</sup>''β'', then {{sfn|Amemiya|1985|loc=Theorem 3.2.7}}
:: <math>\begin{align}
& T_n + S_n \ \xrightarrow{d}\ \alpha+\beta, \\
& T_n S_n \ \xrightarrow{d}\ \alpha\beta, \\
& T_n / S_n \ \xrightarrow{d}\ \alpha/\beta, \text{ provided that } \beta \neq 0
\end{align}</math>
* If estimator ''T<sub>n</sub>'' is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the [[law of large numbers]] can be used: for a sequence {''X<sub>n</sub>''} of random variables and under suitable conditions,
:: <math>
\frac{1}{n}\sum_{i=1}^n g(X_i) \ \xrightarrow{p}\ \operatorname{E}[\,g(X)\,].
</math>
* If estimator ''T<sub>n</sub>'' is defined implicitly, for example as a value that maximizes a certain objective function (see [[extremum estimator]]), then a more complicated argument involving [[stochastic equicontinuity]] has to be used.{{sfn|Newey|McFadden|1994|loc=Chapter 2}}
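As a worked illustration of the first tool above, take ''h'' to be the quadratic function and let ''T<sub>n</sub>'' be the sample mean of ''n'' independent observations with mean ''θ'' and finite variance ''σ''<sup>2</sup>, so that E[(''T<sub>n</sub>'' − ''θ'')<sup>2</sup>] = ''σ''<sup>2</sup>/''n''. Chebyshev's inequality then gives

: <math>
\Pr\!\big[\,|T_n-\theta| \geq \varepsilon\,\big] \;\leq\; \frac{\operatorname{E}\big[(T_n-\theta)^2\big]}{\varepsilon^2} = \frac{\sigma^2}{n\varepsilon^2} \;\to\; 0,
</math>

so the sample mean is consistent for ''θ'' without any assumption of normality.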
== Bias versus consistency ==
=== Unbiased but not consistent ===

An estimator can be [[biased estimator|unbiased]] but not consistent. For example, for an [[iid]] sample {''x''{{su|b=1}},..., ''x{{su|b=n}}''} one can use ''T{{su|b=n}}''(''X'') = ''x''{{su|b=n}} as the estimator of the mean E[''X'']. Note that here the sampling distribution of ''T{{su|b=n}}'' is the same as the underlying distribution (for any ''n'', as it ignores all points but the last), so E[''T{{su|b=n}}''(''X'')] = E[''X''] and it is unbiased, but it does not converge to any value.

However, if a sequence of estimators is unbiased ''and'' converges to a value, then it is consistent, as it must converge to the correct value.
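A small simulation sketch of the last-observation estimator above (assuming NumPy; the normal distribution and the constants are illustrative assumptions, since the argument only needs an iid sample with a finite mean) shows that its spread around E[''X''] does not shrink as ''n'' grows, unlike the sample mean:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 100, 1000, 10000):
    x = rng.normal(4.0, 2.0, size=(1000, n))   # 1000 iid samples of size n
    T_last = x[:, -1]                          # T_n(X) = x_n: unbiased, not consistent
    T_mean = x.mean(axis=1)                    # sample mean: unbiased and consistent
    # spread of T_last stays near 2.0, while that of T_mean shrinks like 1/sqrt(n)
    print(n, T_last.std(), T_mean.std())
</syntaxhighlight>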
=== Biased but consistent ===

Alternatively, an estimator can be biased but consistent. For example, if the mean is estimated by <math>{1 \over n} \sum x_i + {1 \over n}</math>, it is biased, but as <math>n \rightarrow \infty</math> it approaches the correct value, and so it is consistent.

Important examples include the [[sample variance]] and [[sample standard deviation]]. Without [[Bessel's correction]] (that is, when using the sample size <math>n</math> instead of the [[Degrees of freedom (statistics)|degrees of freedom]] <math>n-1</math>), these are both negatively biased but consistent estimators. With the correction, the corrected sample variance is unbiased, while the corrected sample standard deviation is still biased, but less so, and both are still consistent: the correction factor converges to 1 as sample size grows.
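The following sketch (again assuming NumPy; the normal model and the constants are only illustrative) shows the two biased-but-consistent estimators just discussed settling on the true values as ''n'' grows:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 4.0, 2.0                      # illustrative true mean and standard deviation
for n in (10, 100, 1000, 10000):
    x = rng.normal(mu, sigma, size=n)
    mean_shifted = x.mean() + 1.0 / n     # biased by 1/n, but consistent for mu
    var_no_bessel = x.var(ddof=0)         # divides by n: negatively biased, consistent for sigma**2
    var_bessel = x.var(ddof=1)            # divides by n - 1: unbiased, also consistent
    print(n, mean_shifted, var_no_bessel, var_bessel)
</syntaxhighlight>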
Here is another example. Let <math>T_n</math> be a sequence of estimators for <math>\theta</math> with
:<math>\Pr(T_n) = \begin{cases}
1 - 1/n, & \mbox{if }\, T_n = \theta \\
1/n, & \mbox{if }\, T_n = n\delta + \theta
\end{cases}</math>
We can see that <math>T_n \xrightarrow{p} \theta</math> (for any fixed <math>\varepsilon > 0</math>, <math>\Pr(|T_n - \theta| > \varepsilon) \le 1/n \to 0</math>), and that <math>\operatorname{E}[T_n] = (1 - 1/n)\theta + (1/n)(n\delta + \theta) = \theta + \delta</math>, so the bias does not converge to zero.
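A simulation sketch of this last example (assuming NumPy; ''θ'' = 4 and ''δ'' = 1 are arbitrary illustrative values) makes both claims visible: the estimated bias stays near ''δ'', while the probability of being far from ''θ'' decays like 1/''n'':

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
theta, delta = 4.0, 1.0                          # illustrative parameter and offset
for n in (10, 100, 1000, 10000):
    # draw 100000 copies of T_n: value n*delta + theta with probability 1/n, else theta
    far = rng.random(100000) < 1.0 / n
    T_n = np.where(far, n * delta + theta, theta)
    # estimated bias stays near delta; estimated Pr(|T_n - theta| > 0.1) is about 1/n
    print(n, T_n.mean() - theta, np.mean(np.abs(T_n - theta) > 0.1))
</syntaxhighlight>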
== See also ==

* [[Efficient estimator]]
* [[Fisher consistency]], an alternative, although rarely used, concept of consistency for estimators
* [[Regression dilution]]
* [[Statistical hypothesis testing]]
* [[Instrumental variables estimation]]

== Notes ==
{{reflist}}
== References ==
* {{cite book
| last = Amemiya
| first = Takeshi
| authorlink = Takeshi Amemiya
| title = Advanced Econometrics
| year = 1985
| publisher = [[Harvard University Press]]
| isbn = 0-674-00560-0
| url = https://archive.org/details/advancedeconomet00amem
| url-access = registration
| ref = CITEREFAmemiya1985
}}
* {{cite book
| author1-last = Lehmann | author1-first = E. L. | author1-link = Erich Leo Lehmann
| author2-last = Casella | author2-first = G. | author2-link = George Casella
| title = Theory of Point Estimation
| year = 1998
| edition = 2nd
| publisher = Springer
| isbn = 0-387-98502-6
| ref = CITEREFLehmanCasella1998
}}
* {{cite book
| last1 = Newey | first1 = W. K.
| last2 = McFadden | first2 = D.
| authorlink2 = Daniel McFadden
| chapter = Chapter 36: Large sample estimation and hypothesis testing
| title = Handbook of Econometrics
| volume = 4
| year = 1994
| editor = Robert F. Engle | editor2 = Daniel L. McFadden
| publisher = Elsevier Science
| isbn = 0-444-88766-0
| s2cid = 29436457
| ref = CITEREFNeweyMcFadden1994
}}
* {{SpringerEOM| title=Consistent estimator |id=C/c025240 |first=M. S. |last=Nikulin}}
*{{citation | last= Sober | first= E. | author-link= Elliott Sober | title= Likelihood and convergence | journal= [[Philosophy of Science]] | year= 1988 | volume= 55 | issue= 2 | pages= 228–237 | doi= 10.1086/289429}}.

== External links ==
{{DEFAULTSORT:Consistent estimator}}

[[Category:Estimator]]
[[Category:Asymptotic theory (statistics)]]