Consistent estimator

:<math>
\Pr\left(\left|T_n-\mu\right| \geq \varepsilon\right) = 2\left(1-\Phi\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right) \to 0
</math>
as ''n'' tends to infinity, for any fixed {{nowrap|''ε'' > 0}}. Therefore, the sequence ''T<sub>n</sub>'' of sample means is consistent for the population mean&nbsp;''μ'' (recalling that <math>\Phi</math> is the [[Cumulative distribution function|cumulative distribution]] of the standard normal distribution).
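This convergence can be checked numerically. The sketch below compares the theoretical tail probability 2(1 − Φ(√''n''&thinsp;''ε''/''σ'')) with an empirical frequency from repeated samples; the standard normal population, the tolerance ''ε'' = 0.1, and the seed are illustrative assumptions, not part of the article.

```python
import math
import random

random.seed(0)
mu, sigma, eps = 0.0, 1.0, 0.1  # assumed population mean/sd and a fixed tolerance

def phi(z):
    # standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for n in [10, 100, 1000, 10000]:
    # theoretical P(|T_n - mu| >= eps) for the sample mean of n normal draws
    p_theory = 2.0 * (1.0 - phi(math.sqrt(n) * eps / sigma))
    # empirical frequency of the same event over repeated samples
    trials = 500
    hits = sum(
        abs(sum(random.gauss(mu, sigma) for _ in range(n)) / n - mu) >= eps
        for _ in range(trials)
    )
    print(n, round(p_theory, 4), round(hits / trials, 4))
```

Both the theoretical and the empirical column shrink toward zero as ''n'' grows, which is exactly the consistency statement above.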
 
== Establishing consistency ==
the most common choice for the function ''h'' being either the absolute value (in which case it is known as the [[Markov inequality]]) or the quadratic function (respectively [[Chebyshev's inequality]]).
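As a numerical illustration of the Chebyshev route: for the sample mean of i.i.d. draws, E[(''T<sub>n</sub>'' − ''θ'')²] = ''σ''²/''n'', so the tail probability is bounded by ''σ''²/(''nε''²), which vanishes as ''n'' grows. A minimal sketch checking the bound empirically; the fair-die population, ''ε'', ''n'', and seed are illustrative assumptions.

```python
import random

random.seed(1)
eps, n, trials = 0.3, 500, 2000
mu, var = 3.5, 35.0 / 12.0  # mean and variance of a single fair die roll

# Chebyshev bound for the sample mean: Var(T_n) / eps^2 = var / (n * eps^2)
bound = var / (n * eps ** 2)

hits = 0
for _ in range(trials):
    t_n = sum(random.randint(1, 6) for _ in range(n)) / n
    if abs(t_n - mu) >= eps:
        hits += 1

# empirical tail probability never exceeds the Chebyshev bound
print(hits / trials, "<=", bound)
```

Since the bound itself goes to zero as ''n'' increases, the estimator is consistent.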
 
* Another useful result is the [[continuous mapping theorem]]: if ''T<sub>n</sub>'' is consistent for ''θ'' and ''g''(·) is a real-valued function continuous at the point ''θ'', then ''g''(''T<sub>n</sub>'') will be consistent for ''g''(''θ''):{{sfn|Amemiya|1985|loc=Theorem 3.2.6}}
:: <math>
T_n\ \xrightarrow{p}\ \theta\ \quad\Rightarrow\quad g(T_n)\ \xrightarrow{p}\ g(\theta)
</math>
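For instance, since the sample mean of i.i.d. draws is consistent for the population mean ''μ'' and exp(·) is continuous everywhere, the continuous mapping theorem gives that exp(''T<sub>n</sub>'') is consistent for exp(''μ''). A small simulation sketch; the Exponential(1) population (so ''μ'' = 1), sample sizes, and seed are illustrative assumptions.

```python
import math
import random

random.seed(2)
mu = 1.0  # mean of an assumed Exponential(1) population

for n in [100, 10000]:
    sample = [random.expovariate(1.0) for _ in range(n)]
    t_n = sum(sample) / n   # sample mean, consistent for mu
    g_t_n = math.exp(t_n)   # by continuous mapping, consistent for exp(mu)
    print(n, round(t_n, 3), round(g_t_n, 3), "target:", round(math.exp(mu), 3))
```

As ''n'' grows, exp(''T<sub>n</sub>'') settles near exp(''μ'') ≈ 2.718, with no separate argument needed beyond the consistency of ''T<sub>n</sub>'' itself.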
 
=== Unbiased but not consistent ===
An estimator can be [[biased estimator|unbiased]] but not consistent. For example, for an [[iid]] sample {''x''{{su|b=1}}, ..., ''x''{{su|b=n}}} one can use ''T{{su|b=n}}''(''X'') = ''x''{{su|b=n}} as the estimator of the mean E[''X'']. Note that here the sampling distribution of ''T{{su|b=n}}'' is the same as the underlying distribution (for any ''n'', as it ignores all points but the last). So E[''T{{su|b=n}}''(''X'')] = E[''X''] for any ''n'', hence it is unbiased, but it does not converge to any value.
 
However, if a sequence of estimators is unbiased ''and'' converges to a value, then it is consistent, as it must converge to the correct value.
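A small simulation makes the distinction concrete: the "last observation" estimator is unbiased at every ''n'' yet keeps fluctuating with the full spread of the underlying distribution, while its expectation (approximated by averaging many independent runs) matches the true mean. The standard normal population (mean 0) and seed are illustrative assumptions.

```python
import random

random.seed(3)
mu = 0.0  # mean of an assumed standard normal population

# T_n(X) = x_n: the estimator ignores all points but the last
estimates = []
for n in [10, 100, 1000, 10000]:
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    estimates.append(sample[-1])

# no convergence: each value is just one draw from the population
print(estimates)

# unbiasedness: averaging many independent realizations of T_n recovers mu
runs = [random.gauss(mu, 1.0) for _ in range(20000)]
print(sum(runs) / len(runs))  # close to mu = 0
```

Increasing ''n'' does not shrink the fluctuations of ''T{{su|b=n}}'' at all, which is why unbiasedness alone does not imply consistency.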