Conditional probability distribution

{{Short description|Probability theory and statistics concept}}
{{More citations needed|date=April 2013}}
In [[probability theory]] and [[statistics]], given two [[joint probability distribution|jointly distributed]] [[random variable]]s <math>X</math> and <math>Y</math>, the '''conditional probability distribution''' of <math>Y</math> given <math>X</math> is the [[probability distribution]] of <math>Y</math> when <math>X</math> is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value <math>x</math> of <math>X</math> as a parameter. When both <math>X</math> and <math>Y</math> are [[categorical variable]]s, a [[conditional probability table]] is typically used to represent the conditional probability. The conditional distribution contrasts with the [[marginal distribution]] of a random variable, which is its distribution without reference to the value of the other variable.
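On a finite space, a conditional distribution can be read off a joint probability table by normalizing one row: <math>P(Y=y \mid X=x) = P(X=x, Y=y)/P(X=x)</math>. A minimal sketch in Python (the joint pmf below is illustrative, chosen here for concreteness rather than taken from the article):

```python
from fractions import Fraction

# Illustrative joint pmf (hypothetical values) for categorical
# X in {0, 1} and Y in {0, 1, 2}.
joint = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 8), (0, 2): Fraction(1, 4),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 8), (1, 2): Fraction(1, 8),
}

def marginal_X(x):
    # Marginal distribution of X: sum the joint pmf over all y.
    return sum(p for (xx, _), p in joint.items() if xx == x)

def cond_Y_given_X(y, x):
    # P(Y = y | X = x) = P(X = x, Y = y) / P(X = x)
    return joint[(x, y)] / marginal_X(x)

# Each conditional distribution sums to 1, unlike the joint row itself.
assert sum(cond_Y_given_X(y, 0) for y in range(3)) == 1
```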
 
 
==Measure-theoretic formulation==
Let <math>(\Omega, \mathcal{F}, P)</math> be a probability space and <math>\mathcal{G} \subseteq \mathcal{F}</math> a sub-<math>\sigma</math>-field of <math>\mathcal{F}</math>. Given <math>A\in \mathcal{F}</math>, the [[Radon–Nikodym theorem]] implies that there is<ref>[[#billingsley95|Billingsley (1995)]], p. 430</ref> a <math>\mathcal{G}</math>-measurable random variable <math>P(A\mid\mathcal{G}):\Omega\to \mathbb{R}</math>, called the '''conditional probability''', such that<math display="block">\int_G P(A\mid\mathcal{G})(\omega) \, dP(\omega)=P(A\cap G)</math>for every <math>G\in \mathcal{G}</math>; such a random variable is uniquely defined up to sets of probability zero. A conditional probability is called [[Regular conditional probability|'''regular''']] if <math> \operatorname{P}(\cdot\mid\mathcal{G})(\omega) </math> is a [[probability measure]] on <math>(\Omega, \mathcal{F})</math> for <math>P</math>-almost every <math>\omega \in \Omega</math>.
 
Special cases:
 
* For the trivial sigma algebra <math>\mathcal G= \{\emptyset,\Omega\}</math>, the conditional probability is the constant function <math>\operatorname{P}\!\left( A\mid \{\emptyset,\Omega\} \right) = \operatorname{P}(A).</math>
* If <math>A\in \mathcal{G}</math>, then <math>\operatorname{P}(A\mid\mathcal{G})=1_A</math>, the [[indicator function]] of <math>A</math>.
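On a finite probability space the defining identity can be checked directly. The Python sketch below uses a hypothetical example (not from the article): the uniform measure on four points, with <math>\mathcal{G}</math> generated by a two-block partition, so that <math>P(A\mid\mathcal{G})</math> is constant on each block. It then verifies <math display="inline">\int_G P(A\mid\mathcal{G}) \, dP = P(A\cap G)</math> for every <math>G\in\mathcal{G}</math>:

```python
from itertools import chain, combinations
from fractions import Fraction

# Hypothetical finite example: Omega = {0,1,2,3} with uniform P,
# and G the sigma-field generated by the partition {{0,1}, {2,3}}.
omega = [0, 1, 2, 3]
P = {w: Fraction(1, 4) for w in omega}
partition = [{0, 1}, {2, 3}]
A = {1, 2}

def prob(S):
    return sum(P[w] for w in S)

def cond_prob(w):
    # P(A | G)(w) is constant on each partition block: P(A ∩ B) / P(B).
    block = next(B for B in partition if w in B)
    return prob(A & block) / prob(block)

# The sigma-field consists of all unions of partition blocks (including
# the empty union).  Check the defining property on each such G.
for blocks in chain.from_iterable(combinations(partition, r) for r in range(3)):
    G = set().union(*blocks)
    integral = sum(cond_prob(w) * P[w] for w in G)
    assert integral == prob(A & G)
```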
Let <math>X : \Omega \to E</math> be an <math>(E, \mathcal{E})</math>-valued random variable. For each <math>B \in \mathcal{E}</math>, define <math display="block">\mu_{X \, | \, \mathcal{G}} (B \, |\, \mathcal{G}) = \mathrm{P} (X^{-1}(B) \, | \, \mathcal{G}).</math>For any <math>\omega \in \Omega</math>, the function <math>\mu_{X \, | \mathcal{G}}(\cdot \, | \mathcal{G}) (\omega) : \mathcal{E} \to \mathbb{R}</math> is called the '''[[Conditional expectation#Definition of conditional probability|conditional probability]] distribution''' of <math>X</math> given <math>\mathcal{G}</math>. If it is a probability measure on <math>(E, \mathcal{E})</math>, then it is called [[Regular conditional probability|'''regular''']].
 
For a real-valued random variable (with respect to the Borel <math>\sigma</math>-field <math>\mathcal{R}^1</math> on <math>\mathbb{R}</math>), every conditional probability distribution is regular.<ref>[[#billingsley95|Billingsley (1995)]], p. 439</ref> In this case, <math>E[X \mid \mathcal{G}] = \int_{-\infty}^\infty x \, \mu_{X \mid \mathcal{G}}(d x, \cdot)</math> almost surely.
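In the finite-discrete case the integral reduces to a sum over the values of <math>X</math>. The Python sketch below (a hypothetical example, with an illustrative <math>X</math> and the same kind of partition-generated <math>\mathcal{G}</math>) computes <math>\mu_{X \mid \mathcal{G}}(\{x\} \mid \mathcal{G})(\omega)</math> and recovers <math>E[X \mid \mathcal{G}]</math> from it:

```python
from fractions import Fraction

# Hypothetical sketch: uniform P on Omega = {0,1,2,3}, G generated by the
# partition {{0,1}, {2,3}}, and an illustrative real-valued X.
omega = [0, 1, 2, 3]
P = {w: Fraction(1, 4) for w in omega}
partition = [{0, 1}, {2, 3}]
X = {0: 1, 1: 3, 2: 3, 3: 7}

def block_of(w):
    return next(B for B in partition if w in B)

def mu(x, w):
    # mu_{X|G}({x} | G)(w) = P({X = x} ∩ B) / P(B), where B is w's block.
    B = block_of(w)
    return sum(P[v] for v in B if X[v] == x) / sum(P[v] for v in B)

def cond_exp(w):
    # E[X | G](w) = sum over x of x * mu_{X|G}({x} | G)(w)
    return sum(x * mu(x, w) for x in set(X.values()))

# On the block {0,1}: E[X|G] = (1 + 3) / 2 = 2; on {2,3}: (3 + 7) / 2 = 5.
assert cond_exp(0) == 2 and cond_exp(3) == 5
```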
 
=== Relation to conditional expectation ===
}}
{{refend}}
 
{{Authority control}}
 
[[Category:Theory of probability distributions]]