Continuous mapping theorem: Difference between revisions

{{Short description|Probability theorem}}
{{Distinguish|text=the [[contraction mapping theorem]]}}
In [[probability theory]], the '''continuous mapping theorem''' states that continuous functions [[Continuous function#Heine definition of continuity|preserve limits]] even if their arguments are sequences of random variables. A continuous function, in [[Continuous function#Heine definition of continuity|Heine's definition]], is such a function that maps convergent sequences into convergent sequences: if ''x<sub>n</sub>'' → ''x'' then ''g''(''x<sub>n</sub>'') → ''g''(''x''). The ''continuous mapping theorem'' states that this will also be true if we replace the deterministic sequence {''x<sub>n</sub>''} with a sequence of random variables {''X<sub>n</sub>''}, and replace the standard notion of convergence of real numbers “→” with one of the types of [[convergence of random variables]].
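A standard illustration (not part of the statement above, but a routine consequence of it): if <math>X_n \,\xrightarrow{d}\, X</math> where ''X'' has the standard normal distribution, then applying the continuous function <math>g(x) = x^2</math> gives

: <math>X_n^2 \ \xrightarrow{d}\ X^2 \sim \chi^2_1,</math>

since the square of a standard normal random variable has the [[chi-squared distribution]] with one degree of freedom.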
: <math> \mathbb E f(X_n) \to \mathbb E f(X)</math> for every bounded continuous functional ''f''.
 
So it suffices to prove that <math> \mathbb E f(g(X_n)) \to \mathbb E f(g(X))</math> for every bounded continuous functional ''f''. For simplicity, assume that ''g'' is continuous everywhere. Then <math> F = f \circ g</math> is itself a bounded continuous functional, and the claim follows from the statement above. The general case, in which ''g'' need only be continuous outside a set of probability zero under the law of ''X'', is slightly more technical.
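In symbols, the chain of implications in the everywhere-continuous case reads:

: <math>X_n \ \xrightarrow{d}\ X \quad\Longrightarrow\quad \mathbb E\,(f \circ g)(X_n) \to \mathbb E\,(f \circ g)(X) \quad\Longleftrightarrow\quad \mathbb E f(g(X_n)) \to \mathbb E f(g(X)) \quad\Longrightarrow\quad g(X_n) \ \xrightarrow{d}\ g(X),</math>

where the first implication uses the portmanteau characterization with the bounded continuous functional <math>F = f \circ g</math>, and the last applies the same characterization in the reverse direction to the sequence <math>\{g(X_n)\}</math>.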
 
===Convergence in probability===
{{reflist}}
 
[[Category:Theorems in probability theory]]
[[Category:Theorems in statistics]]
[[Category:Articles containing proofs]]