[[File:Probability tree diagram.svg|thumb|On a [[Tree diagram (probability theory)|tree diagram]], branch probabilities are conditional on the event associated with the parent node. (Here, the overbars indicate that the event does not occur.)]]
[[File:Venn Pie Chart describing Bayes' law.png|thumb|Venn pie chart describing Bayes' law]]
=== Conditioning on an event ===
Given two [[event (probability theory)|events]] {{mvar|A}} and {{mvar|B}} from the [[sigma-field]] of a probability space, with the [[marginal probability|unconditional probability]] of {{mvar|B}} being greater than zero (i.e., {{math|P(''B'') > 0}}), the conditional probability of {{mvar|A}} given {{mvar|B}} (<math>P(A \mid B)</math>) is the probability of ''A'' occurring if ''B'' has occurred or is assumed to have occurred.<ref name=":1">{{Cite book|last=Reichl|first=Linda Elizabeth|title=A Modern Course in Statistical Physics|publisher=WILEY-VCH|year=2016|isbn=978-3-527-69049-7|edition=4th revised and updated|chapter=2.3 Probability}}</ref> Conditioning on ''B'' amounts to restricting the sample space to the outcomes in ''B''. The conditional probability is the [[quotient]] of the probability of the intersection of events {{mvar|A}} and {{mvar|B}}, that is, <math>P(A \cap B)</math>, the probability that ''A'' and ''B'' occur together, and the [[probability]] of {{mvar|B}}:<ref name=":0" /><ref>{{citation|last=Kolmogorov|first=Andrey|title=Foundations of the Theory of Probability|publisher=Chelsea|year=1956 }}</ref><ref>{{Cite web|title=Conditional Probability|url=http://www.stat.yale.edu/Courses/1997-98/101/condprob.htm|access-date=2020-09-11|website=www.stat.yale.edu}}</ref>
:<math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}. </math>
For a sample space consisting of equally likely outcomes, the probability of the event ''A'' is the fraction of outcomes in ''A'' among all outcomes in the sample space. The equation above can then be read as the fraction of the outcomes in ''B'' that also lie in ''A''. Note that the above equation is a definition, not just a theoretical result. We denote the quantity <math>\frac{P(A \cap B)}{P(B)}</math> as <math>P(A\mid B)</math> and call it the "conditional probability of {{mvar|A}} given {{mvar|B}}".
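As an illustrative sketch of this counting interpretation (the two-dice sample space and the events below are assumptions chosen for illustration, not taken from the cited sources), the definition can be evaluated by direct enumeration:

<syntaxhighlight lang="python">
from fractions import Fraction
from itertools import product

# Sample space: all ordered outcomes of two fair six-sided dice, equally likely.
omega = list(product(range(1, 7), repeat=2))

in_A = lambda w: w[0] == 6          # event A: the first die shows 6
in_B = lambda w: sum(w) > 8         # event B: the total is greater than 8

p_B = Fraction(sum(in_B(w) for w in omega), len(omega))
p_A_and_B = Fraction(sum(in_A(w) and in_B(w) for w in omega), len(omega))

# Conditional probability as the ratio P(A ∩ B) / P(B).
print(p_A_and_B / p_B)              # 2/5
</syntaxhighlight>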
Some authors, such as [[Bruno de Finetti|de Finetti]], prefer to introduce conditional probability as an [[Probability axioms|axiom of probability]]:
:<math>P(A \cap B) = P(A \mid B)P(B). </math>
This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be read as: the probability that ''B'' occurs, multiplied by the probability that ''A'' occurs provided that ''B'' has occurred, equals the probability that ''A'' and ''B'' both occur (although not necessarily at the same time). Additionally, this may be preferred philosophically; under major [[probability interpretations]], such as the [[Subjective probability|subjective theory]], conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of <math>A \cap B</math> and introduces a symmetry with the summation axiom (the Poincaré formula):
:<math>P(A \cup B) = P(A) + P(B) - P(A \cap B)</math>
:Thus the equations can be combined to express the intersection and the union in terms of the conditional probability:
:<math> P(A \cap B)= P(A) + P(B) - P(A \cup B) = P(A \mid B)P(B) </math>
:<math> P(A \cup B)= P(A) + P(B) - P(A \mid B)P(B) </math>
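As a worked illustration of this combined identity (the deck-of-cards events here are an assumption for illustration): drawing one card from a standard 52-card deck, let ''A'' be "the card is a heart" and ''B'' be "the card is red", so that <math>P(A) = \tfrac{1}{4}</math>, <math>P(B) = \tfrac{1}{2}</math> and <math>P(A \mid B) = \tfrac{13}{26} = \tfrac{1}{2}</math>. Then
:<math> P(A \cup B) = P(A) + P(B) - P(A \mid B)P(B) = \tfrac{1}{4} + \tfrac{1}{2} - \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{2}, </math>
which agrees with counting directly: every heart is red, so <math>A \cup B</math> is just the set of 26 red cards.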
==== As the probability of a conditional event ====
Conditional probability can be defined as the probability of a conditional event <math>A_B</math>. The [[Goodman–Nguyen–Van Fraassen algebra|Goodman–Nguyen–Van Fraassen]] conditional event can be defined as:
:<math>A_B = \bigcup_{i \ge 1} \left( \bigcap_{j<i} \overline{B}_j, A_i B_i \right), </math> where <math>A_i </math> and <math>B_i </math> represent states or elements of ''A'' or ''B.'' <ref>{{Cite journal|last1=Flaminio|first1=Tommaso|last2=Godo|first2=Lluis|last3=Hosni|first3=Hykel|date=2020-09-01|title=Boolean algebras of conditionals, probability and logic|url=https://www.sciencedirect.com/science/article/pii/S000437022030103X|journal=Artificial Intelligence|language=en|volume=286|pages=103347|doi=10.1016/j.artint.2020.103347|arxiv=2006.04673|s2cid=214584872 |issn=0004-3702}}</ref>
It can be shown that
:<math>P(A_B)= \frac{P(A \cap B)}{P(B)}</math>
which meets the Kolmogorov definition of conditional probability.<ref>{{Citation|last=Van Fraassen|first=Bas C.|title=Probabilities of Conditionals|date=1976|url=https://doi.org/10.1007/978-94-010-1853-1_10|work=Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science: Volume I Foundations and Philosophy of Epistemic Applications of Probability Theory|pages=261–308|editor-last=Harper|editor-first=William L.|series=The University of Western Ontario Series in Philosophy of Science|place=Dordrecht|publisher=Springer Netherlands|language=en|doi=10.1007/978-94-010-1853-1_10|isbn=978-94-010-1853-1|access-date=2021-12-04|editor2-last=Hooker|editor2-first=Clifford Alan|url-access=subscription}}</ref>
=== Conditioning on an event of probability zero ===
The case of greatest interest is that of a random variable {{mvar|Y}}, conditioned on a continuous random variable {{mvar|X}} resulting in a particular outcome {{mvar|x}}. The event <math>B = \{ X = x \}</math> has probability zero and, as such, cannot be conditioned on.
Instead of conditioning on {{mvar|X}} being ''exactly'' {{mvar|x}}, we could condition on it being closer than distance <math>\epsilon</math> away from {{mvar|x}}. The event <math>B = \{ x - \epsilon < X < x + \epsilon \}</math> has nonzero probability and hence can be conditioned on.
We can then take the [[limit (mathematics)|limit]]
{{NumBlk|::|<math>\lim_{\epsilon \to 0} P(A \mid x - \epsilon < X < x + \epsilon).</math>|{{EquationRef|1}}}}
For example, if two continuous random variables {{mvar|X}} and {{mvar|Y}} have a joint density <math>f_{X,Y}(x,y)</math>, then by [[L'Hôpital's rule]] and [[Leibniz integral rule]], upon differentiation with respect to <math>\epsilon</math>:
:<math>
\begin{aligned}
\lim_{\epsilon \to 0} P(Y \in U \mid x_0 - \epsilon < X < x_0 + \epsilon)
&= \lim_{\epsilon \to 0} \frac{\int_{x_0 - \epsilon}^{x_0 + \epsilon} \int_U f_{X, Y}(x, y) \, \mathrm{d}y \, \mathrm{d}x}{\int_{x_0 - \epsilon}^{x_0 + \epsilon} \int_\mathbb{R} f_{X, Y}(x, y) \, \mathrm{d}y \, \mathrm{d}x} \\
&= \frac{\int_U f_{X, Y}(x_0, y) \, \mathrm{d}y}{\int_\mathbb{R} f_{X, Y}(x_0, y) \, \mathrm{d}y}.
\end{aligned}
</math>
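A numerical sketch of this limiting behaviour (the bivariate normal model, the threshold, and the use of NumPy/SciPy are assumptions chosen for illustration, not from the cited sources):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho, x0, n = 0.6, 1.0, 2_000_000

# Sample (X, Y) from a standard bivariate normal with correlation rho.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# For this model, Y | X = x0 is normal with mean rho*x0 and variance 1 - rho^2,
# so the limit of P(Y <= 0.5 | x0 - eps < X < x0 + eps) is known exactly.
exact = norm.cdf(0.5, loc=rho * x0, scale=np.sqrt(1 - rho**2))

for eps in (0.5, 0.1, 0.02):
    near_x0 = np.abs(x - x0) < eps
    estimate = np.mean(y[near_x0] <= 0.5)   # conditional frequency on the thin slab
    print(f"eps={eps:5.2f}  estimate={estimate:.4f}  exact limit={exact:.4f}")
</syntaxhighlight>

As <math>\epsilon</math> shrinks, the conditional frequency computed on the thin slab approaches the value given by the ratio of integrals above.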
It is tempting to ''define'' the undefined probability <math>P(A \mid X=x)</math> using limit ({{EquationNote|1}}), but this cannot be done in a consistent manner. In particular, it is possible to find random variables {{mvar|X}} and {{mvar|W}} and values {{mvar|x}}, {{mvar|w}} such that the events <math>\{X = x\}</math> and <math>\{W = w\}</math> are identical but the resulting limits are not:
:<math>\lim_{\epsilon \to 0} P(A \mid x - \epsilon < X < x + \epsilon) \neq \lim_{\epsilon \to 0} P(A \mid w - \epsilon < W < w + \epsilon).</math>
The [[Borel–Kolmogorov paradox]] demonstrates this with a geometrical argument.
where <math> b_i n \in \mathbb{N}</math><ref name=Draheim2017b />
[[Radical probabilism|Jeffrey conditionalization]]<ref>{{citation|first=Richard C.|last=Jeffrey|title=The Logic of Decision|edition=2nd|publisher=University of Chicago Press|year=1983}}</ref>
is a special case of partial conditional probability, in which the condition events must form a [[Partition of a set|partition]]:
:<math>P(A \mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \sum_{i=1}^m b_i P(A \mid B_i).</math>
=== Example ===
When [[Morse code]] is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous, usually because of interference in the transmission of the message. Therefore, it is important to consider, for example, the probability that a "dot" was sent given that a "dot" was received. By Bayes' theorem, this is represented by: <math>P(\text{dot sent } | \text{ dot received}) = P(\text{dot received } | \text{ dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})}.</math> In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so <math>P(\text{dot sent}) = \frac{3}{7}</math> and <math>P(\text{dash sent}) = \frac{4}{7}</math>. If it is assumed that a transmitted symbol is received correctly with probability <math>\frac{9}{10}</math> (and misread as the other symbol with probability <math>\frac{1}{10}</math>), then the [[law of total probability]] gives:
: <math>P(\text{dot received}) = P(\text{dot received } \cap \text{ dot sent}) + P(\text{dot received } \cap \text{ dash sent})</math>
: <math>P(\text{dot received}) = P(\text{dot received } \mid \text{ dot sent})P(\text{dot sent}) + P(\text{dot received } \mid \text{ dash sent})P(\text{dash sent})</math>
: <math>P(\text{dot received}) = \frac{9}{10}\times\frac{3}{7} + \frac{1}{10}\times\frac{4}{7} = \frac{31}{70}</math>
Now, <math>P(\text{dot sent } \mid \text{ dot received})</math> can be calculated:
: <math>P(\text{dot sent } \mid \text{ dot received}) = P(\text{dot received } \mid \text{ dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})} = \frac{9}{10}\times \frac{\frac{3}{7}}{\frac{31}{70}} = \frac{27}{31}</math>
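The same computation can be reproduced numerically; this minimal sketch uses Python's fractions module with the probabilities from the example above:

<syntaxhighlight lang="python">
from fractions import Fraction

p_dot_sent = Fraction(3, 7)     # prior: dots and dashes are sent in the ratio 3:4
p_dash_sent = Fraction(4, 7)
p_correct = Fraction(9, 10)     # probability a transmitted symbol is received correctly

# Law of total probability over {dot sent, dash sent}.
p_dot_received = p_correct * p_dot_sent + (1 - p_correct) * p_dash_sent

# Bayes' theorem for P(dot sent | dot received).
p_dot_sent_given_dot_received = p_correct * p_dot_sent / p_dot_received

print(p_dot_received)                   # 31/70
print(p_dot_sent_given_dot_received)    # 27/31
</syntaxhighlight>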
== Statistical independence ==
:<math>P(B\mid A) = P(B)</math>
is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined and the preferred definition is symmetrical in ''A'' and ''B''. Independence is not the same as the events being disjoint.<ref>{{Cite book|last=Tijms|first=Henk|url=https://www.cambridge.org/core/books/understanding-probability/B82E701FAAD2C0C2CF36E05CFC0FF3F2|title=Understanding Probability|date=2012|publisher=Cambridge University Press|isbn=978-1-107-65856-1|edition=3rd}}</ref>
It should also be noted that, given an independent event pair [''A'' ''B''] and an event ''C'', the pair is defined as conditionally independent if the following product holds true:
: <math>P(AB \mid C) = P(A \mid C)P(B \mid C).</math>
This theorem can be useful in applications where multiple independent events are being observed.
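As a minimal illustration of checking independence directly from the definition (the two-coin sample space below is an assumption chosen for illustration):

<syntaxhighlight lang="python">
from fractions import Fraction
from itertools import product

# Sample space: two fair coin flips, each of the four outcomes equally likely.
omega = list(product("HT", repeat=2))

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == "H"       # first flip is heads
B = lambda w: w[1] == "H"       # second flip is heads

# Independence: P(A ∩ B) equals P(A)·P(B).
print(prob(lambda w: A(w) and B(w)) == prob(A) * prob(B))   # True
</syntaxhighlight>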
'''Independent events vs. mutually exclusive events'''
=== Assuming conditional probability is of similar size to its inverse ===
{{Main|Confusion of the inverse}}
[[File:Bayes theorem visualisation.svg|thumb|450x450px|A geometric visualization of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that <math>P(A\mid B) P(B) = P(B\mid A) P(A)</math>, i.e. <math>P(A\mid B) = \frac{P(B\mid A) P(A)}{P(B)}</math>.]]
In general, it cannot be assumed that ''P''(''A''|''B'') ≈ ''P''(''B''|''A''). This can be an insidious error, even for those who are highly conversant with statistics.<ref>{{cite book |last=Paulos |first=John Allen |title=Innumeracy: Mathematical Illiteracy and its Consequences |publisher=Hill and Wang |year=1988}}</ref> The relationship between ''P''(''A''|''B'') and ''P''(''B''|''A'') is given by [[Bayes' theorem]]:
:<math>\begin{align}
P(B\mid A) &= \frac{P(A\mid B) P(B)}{P(A)}\\
&= \frac{P(A\mid B) P(B)}{\sum_n P(A\mid B_n) P(B_n)},
\end{align}</math>
where the events <math>(B_n)</math> form a countable [[Partition of a set|partition]] of <math>\Omega</math>.
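A small numerical sketch of this distinction (the prevalence, sensitivity, and false-positive rate below are assumed illustrative figures, not from the cited sources):

<syntaxhighlight lang="python">
from fractions import Fraction

p_B = Fraction(1, 100)             # P(B): prevalence of a condition
p_A_given_B = Fraction(9, 10)      # P(A|B): test positive given the condition
p_A_given_notB = Fraction(9, 100)  # P(A|not B): false-positive rate

# Law of total probability over the partition {B, not B}.
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Bayes' theorem: P(B|A) is far smaller than P(A|B) when B is rare.
p_B_given_A = p_A_given_B * p_B / p_A
print(float(p_A_given_B))          # 0.9
print(float(p_B_given_A))          # ≈ 0.092
</syntaxhighlight>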
This fallacy may arise through [[selection bias]].
=== Over- or under-weighting priors ===
Formally, ''P''(''A'' | ''B'') is defined as the probability of ''A'' according to a new probability function on the sample space, such that outcomes not in ''B'' have probability 0 and that it is consistent with all original [[probability measure]]s.<ref>George Casella and Roger L. Berger (1990), ''Statistical Inference'', Duxbury Press, {{ISBN|0-534-11958-1}} (p. 18 ''et seq.'')</ref><ref name="grinstead">[http://math.dartmouth.edu/~prob/prob/prob.pdf Grinstead and Snell's Introduction to Probability], p. 134</ref>
Let Ω be a discrete [[sample space]] with [[elementary event]]s {''ω''}, and let ''P'' be the probability measure with respect to the [[σ-algebra]] of Ω. Suppose we are told that the event ''B'' ⊆ Ω has occurred. A new [[probability distribution]] (denoted by the conditional notation) is to be assigned on {''ω''} to reflect this. All events that are not in ''B'' will have null probability in the new distribution. For events in ''B'', two conditions must be met: the probability of ''B'' is one and the relative magnitudes of the probabilities must be preserved. The former is required by the [[Probability axioms|axioms of probability]], and the latter stems from the fact that the new probability measure has to be the analog of ''P'' in which the probability of ''B'' is one. Hence, for some scale factor ''α'', the new distribution must satisfy:
#<math>\omega \in B : P(\omega\mid B) = \alpha P(\omega)</math>
#<math>\omega \notin B : P(\omega\mid B) = 0</math>
#<math>\sum_{\omega \in \Omega} P(\omega\mid B) = 1.</math>
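A minimal sketch of this renormalization for a concrete discrete distribution (the die and the conditioning event below are assumptions for illustration):

<syntaxhighlight lang="python">
from fractions import Fraction

# Unconditional measure P on a fair six-sided die.
P = {w: Fraction(1, 6) for w in range(1, 7)}
B = {2, 4, 6}                               # conditioning event: "the roll is even"

alpha = 1 / sum(P[w] for w in B)            # scale factor 1 / P(B); requires P(B) > 0

# Outcomes outside B get probability 0; outcomes in B keep their relative magnitudes.
P_given_B = {w: (alpha * P[w] if w in B else Fraction(0)) for w in P}

print(P_given_B[4])                         # 1/3
print(sum(P_given_B.values()))              # 1
</syntaxhighlight>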