In [[probability theory]], '''regular conditional probability''' is a concept that formalizes the notion of conditioning on the outcome of a [[random variable]]. The resulting '''conditional probability distribution''' is a parametrized family of probability measures called a [[Markov kernel]].
==Motivation==
Normally we define the '''conditional probability''' of an event ''A'' given an event ''B'' as
:<math>P(A|B)=\frac{P(A\cap B)}{P(B)}.</math>

Consider a discrete random variable ''X'' that represents the roll of a die.
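The ratio definition above can be checked numerically. A minimal sketch for a fair die; the particular events (''A'' = "the roll is even", ''B'' = "the roll is greater than 3") are illustrative choices, not from the text:

```python
from fractions import Fraction

# Sample space of a fair die, with the uniform probability measure.
OMEGA = {1, 2, 3, 4, 5, 6}

def prob(event):
    """P(event) under the uniform measure on OMEGA."""
    return Fraction(len(event & OMEGA), len(OMEGA))

def cond_prob(a, b):
    """P(A | B) = P(A ∩ B) / P(B), defined only when P(B) > 0."""
    if prob(b) == 0:
        raise ValueError("P(B) = 0: conditional probability undefined")
    return prob(a & b) / prob(b)

A = {2, 4, 6}   # the roll is even
B = {4, 5, 6}   # the roll is greater than 3
print(cond_prob(A, B))  # 2/3
```

Using exact `Fraction` arithmetic avoids floating-point noise, and the explicit guard mirrors the fact that the ratio is undefined when <math>P(B) = 0</math>.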
Conditioning on the outcome of ''X'' yields a two-variable function <math>\nu:\mathbb{R} \times \mathcal{F} \to \mathbb{R}</math> given by
:<math>\nu(x, E) = P(E | X = x).</math>
Note that when ''x'' is not a possible outcome of ''X'', the function is undefined: <math>P(X = x) = 0</math>, so the defining ratio has a zero denominator; the roll of a die coming up 27 is a probability-zero event. The function <math>\nu</math> is therefore defined only [[almost everywhere]] in ''x''.
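For the die example, the kernel <math>\nu</math> can be sketched concretely. A minimal sketch, assuming the events ''E'' are stated in terms of the die outcome itself, so that given <math>X = x</math> the conditional measure is a point mass at ''x''; the function name `nu` is illustrative:

```python
DIE_OUTCOMES = {1, 2, 3, 4, 5, 6}

def nu(x, event):
    """nu(x, E) = P(E | X = x) for a die roll X.

    Given X = x, an event E about the roll occurs exactly when x is in E,
    so the conditional measure is a point mass at x.  For x outside the
    possible outcomes, P(X = x) = 0 and the function is undefined.
    """
    if x not in DIE_OUTCOMES:
        raise ValueError(f"nu undefined at x = {x}: P(X = {x}) = 0")
    return 1.0 if x in event else 0.0

print(nu(4, {2, 4, 6}))   # 1.0: given the roll is 4, "even" is certain
try:
    nu(27, {2, 4, 6})     # undefined: 27 is a probability-zero outcome
except ValueError as e:
    print(e)
```

For each fixed ''x'' in the support, <math>\nu(x, \cdot)</math> is a genuine probability measure, while in ''x'' it is only defined almost everywhere; this asymmetry is what the regular conditional probability concept addresses.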
Now consider two continuous random variables, ''X'' and ''Y'', with density <math>f_{X,Y}(x,y)</math>.
The conditional probability of ''Y'' being in a region <math>A \subseteq \mathbb{R}</math> is given by
:<math>P(Y \in A | X = x) = \frac{\int_A f_{X,Y}(x, y) \mathrm{d}y}{\int_\mathbb{R} f_{X,Y}(x, y) \mathrm{d}y}.</math>
As before, conditional probability is a two-variable function of ''x'' and ''A'', undefined outside the [[support]] of the distribution of ''X'', where the denominator vanishes.
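The continuous formula can likewise be checked numerically. A minimal sketch, using the illustrative joint density <math>f_{X,Y}(x,y) = x + y</math> on the unit square (an assumption for the example, not from the text) and a midpoint Riemann sum in place of the integrals:

```python
def f(x, y):
    """Illustrative joint density: f(x, y) = x + y on the unit square."""
    return x + y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def integrate(g, a, b, n=10_000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def cond_prob_y_in(a, b, x):
    """P(Y in [a, b] | X = x): the ratio of the two integrals in the text."""
    num = integrate(lambda y: f(x, y), a, b)
    den = integrate(lambda y: f(x, y), 0.0, 1.0)
    if den == 0:
        raise ValueError("x outside the support of X: undefined")
    return num / den

# Exact value at x = 0.5: (0.25 + 0.125) / (0.5 + 0.5) = 0.375.
print(cond_prob_y_in(0.0, 0.5, 0.5))
```

Any ''x'' with a vanishing marginal density makes the denominator zero, which is exactly the "undefined outside the support" caveat above.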
==Definition==