In practice one constructs an estimator as a function of an available sample of [[sample size|size]] ''n'', and then imagines being able to keep collecting data and expanding the sample ''ad infinitum''. In this way one would obtain a sequence of estimates indexed by ''n'', and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to [[convergence in probability|converge in probability]] to the true value ''θ''<sub>0</sub>, the estimator is called consistent; otherwise it is said to be '''inconsistent'''.
Consistency as defined here is sometimes referred to as ''weak consistency''. When convergence in probability is replaced with [[almost sure convergence]], the estimator is said to be ''strongly consistent''. Consistency is related to [[bias of an estimator|bias]]; see [[#Bias versus consistency|bias versus consistency]] below.
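For example, writing ''T''<sub>''n''</sub> for the estimate computed from a sample of size ''n'' (a notation introduced here only for illustration), the two notions can be stated as

<math display="block">
\lim_{n\to\infty} \Pr\bigl( |T_n - \theta_0| > \varepsilon \bigr) = 0 \quad \text{for every } \varepsilon > 0 \qquad \text{(weak consistency),}
</math>

<math display="block">
\Pr\Bigl( \lim_{n\to\infty} T_n = \theta_0 \Bigr) = 1 \qquad \text{(strong consistency).}
</math>

The first statement is convergence in probability of the sequence ''T''<sub>''n''</sub> to ''θ''<sub>0</sub>; the second is almost sure convergence, which implies the first.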
== Definition ==