Conditional probability
[[File:Probability tree diagram.svg|thumb|On a [[Tree diagram (probability theory)|tree diagram]], branch probabilities are conditional on the event associated with the parent node. (Here, the overbars indicate that the event does not occur.)]]
 
[[File:Venn Pie Chart describing Bayes' law.png|thumb|Venn pie chart describing conditional probabilities]]
 
=== Conditioning on an event ===
Conditional probability can be defined as the probability of a conditional event <math>A_B</math>. The [[Goodman–Nguyen–Van Fraassen algebra|Goodman–Nguyen–Van Fraassen]] conditional event can be defined as:
 
:<math>A_B = \bigcup_{i \ge 1} \left( \bigcap_{j<i} \overline{B}_j, A_i B_i \right), </math> where <math>A_i </math> and <math>B_i </math> represent states or elements of ''A'' or ''B.'' <ref>{{Cite journal|last1=Flaminio|first1=Tommaso|last2=Godo|first2=Lluis|last3=Hosni|first3=Hykel|date=2020-09-01|title=Boolean algebras of conditionals, probability and logic|url=https://www.sciencedirect.com/science/article/pii/S000437022030103X|journal=Artificial Intelligence|language=en|volume=286|article-number=103347|doi=10.1016/j.artint.2020.103347|arxiv=2006.04673|s2cid=214584872|issn=0004-3702}}</ref>
 
It can be shown that, whenever <math>P(B) > 0</math>,
:<math>P(A_B) = \frac{P(A \cap B)}{P(B)}.</math>
The case of greatest interest is that of a random variable {{mvar|Y}}, conditioned on a continuous random variable {{mvar|X}} resulting in a particular outcome {{mvar|x}}. The event <math>B = \{ X = x \}</math> has probability zero and, as such, cannot be conditioned on.
 
Instead of conditioning on {{mvar|X}} being ''exactly'' {{mvar|x}}, we could condition on it being closer than distance <math>\varepsilon</math> away from {{mvar|x}}. The event <math>B = \{ x-\varepsilon < X < x+\varepsilon \}</math> will generally have nonzero probability and hence, can be conditioned on.
We can then take the [[limit (mathematics)|limit]]
{{NumBlk|::|<math>\lim_{\varepsilon \to 0} P(A \mid x-\varepsilon < X < x+\varepsilon).</math>|{{EquationRef|1}}}}
 
For example, if two continuous random variables {{mvar|X}} and {{mvar|Y}} have a joint density <math>f_{X,Y}(x,y)</math>, then by [[L'Hôpital's rule]] and [[Leibniz integral rule]], upon differentiation with respect to <math>\varepsilon</math>:
:<math>
\begin{aligned}
\lim_{\varepsilon \to 0} P(Y \in U \mid x_0-\varepsilon < X < x_0+\varepsilon) &=
\lim_{\varepsilon \to 0} \frac{\int_{x_0-\varepsilon}^{x_0+\varepsilon} \int_U f_{X, Y}(x, y) \, \mathrm{d}y \, \mathrm{d}x}{\int_{x_0-\varepsilon}^{x_0+\varepsilon} \int_\mathbb{R} f_{X, Y}(x, y) \, \mathrm{d}y \, \mathrm{d}x} \\[6pt]
&= \frac{\int_U f_{X, Y}(x_0, y) \, \mathrm{d}y}{\int_\mathbb{R} f_{X, Y}(x_0, y) \, \mathrm{d}y}.
\end{aligned}
</math>
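The displayed identity can be checked numerically. The sketch below is only illustrative: it assumes a standard bivariate normal joint density with correlation 0.5, the point <math>x_0 = 1</math> and the event <math>U = (0, 2)</math>, none of which come from the text above, and compares the probability conditioned on a narrow band around <math>x_0</math> with the density-ratio expression on the right-hand side.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative joint density: standard bivariate normal with correlation rho.
rho = 0.5

def f_xy(x, y):
    c = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return c * np.exp(-q / 2)

x0 = 1.0                        # conditioning value X = x0 (assumed)
eps = 1e-3                      # half-width of the conditioning band
y = np.linspace(-8.0, 8.0, 8001)
dy = y[1] - y[0]
in_U = (y > 0.0) & (y < 2.0)    # event {Y in U} with U = (0, 2) (assumed)

# Right-hand side: ratio of integrals of f(x0, y) over U and over R.
fy = f_xy(x0, y)
limit_value = np.sum(fy[in_U]) / np.sum(fy)

# Left-hand side: condition directly on the band x0 - eps < X < x0 + eps.
x = np.linspace(x0 - eps, x0 + eps, 201)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")
F = f_xy(X, Y)
band_value = np.sum(F[:, in_U]) * dx * dy / (np.sum(F) * dx * dy)

print(limit_value, band_value)  # the two values agree to several decimal places
</syntaxhighlight>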
 
It is tempting to ''define'' the undefined probability <math>P(A \mid X=x)</math> using limit ({{EquationNote|1}}), but this cannot be done in a consistent manner. In particular, it is possible to find random variables {{mvar|X}} and {{mvar|W}} and values {{mvar|x}}, {{mvar|w}} such that the events <math>\{X = x\}</math> and <math>\{W = w\}</math> are identical but the resulting limits are not:
:<math>\lim_{\varepsilon \to 0} P(A \mid x-\varepsilon \le X \le x+\varepsilon) \neq \lim_{\varepsilon \to 0} P(A \mid w-\varepsilon \le W \le w+\varepsilon).</math>
The [[Borel–Kolmogorov paradox]] demonstrates this with a geometrical argument.
 
where <math> b_i n \in \mathbb{N}</math><ref name=Draheim2017b />
 
[[Radical probabilism|Jeffrey conditionalization]]<ref>{{citation|first=Richard C.|last=Jeffrey|title=The Logic of Decision|edition=2nd|publisher=University of Chicago Press|year=1983|isbn=9780226395821|url=https://books.google.com/books?id=geJ-SwTcmyEC&q=%22conditional+probability%22}}</ref><ref>{{cite web|title=Bayesian Epistemology|url=https://plato.stanford.edu/entries/epistemology-bayesian/|publisher=Stanford Encyclopedia of Philosophy|access-date=December 29, 2017|year=2017}}</ref>
is a special case of partial conditional probability, in which the condition events must form a [[Partition of a set|partition]]:
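In its usual discrete form, Jeffrey's rule replaces the prior probabilities of the partition events <math>B_i</math> with new values <math>q_i</math> and sets <math>P_{\text{new}}(A) = \textstyle\sum_i P(A \mid B_i)\, q_i</math>. The following minimal sketch illustrates the update; the particular conditional probabilities and the shifted partition weights are made-up numbers.

<syntaxhighlight lang="python">
# Jeffrey conditionalization on a partition {B_i}:
#     P_new(A) = sum_i P(A | B_i) * q_i,
# where q_i is the new probability assigned to B_i by the evidence.

def jeffrey_update(p_A_given_B, q_B):
    """p_A_given_B[i] = P(A | B_i); q_B[i] = new probability of B_i (must sum to 1)."""
    assert abs(sum(q_B) - 1.0) < 1e-12, "partition probabilities must sum to 1"
    return sum(p * q for p, q in zip(p_A_given_B, q_B))

# Illustrative two-cell partition: P(A|B_1) = 0.9, P(A|B_2) = 0.2,
# with evidence shifting the partition probabilities to (0.7, 0.3).
print(jeffrey_update([0.9, 0.2], [0.7, 0.3]))   # 0.69
</syntaxhighlight>

Ordinary conditioning on <math>B_1</math> is recovered in the limiting case <math>q_1 = 1</math>.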
 
 
=== Example ===
When [[Morse code]] is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous. This is often caused by interference in the transmission of a message. It is therefore important to consider, when a "dot" is received, for example, the probability that a "dot" was in fact sent. This is represented by: <math>P(\text{dot sent } \mid \text{ dot received}) = P(\text{dot received } \mid \text{ dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})}.</math> In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so the probabilities of a "dot" and "dash" being sent are <math>P(\text{dot sent}) = \frac {3}{7} \text{ and } P(\text{dash sent}) = \frac {4}{7}</math>. If it is assumed that the probability that a dot is transmitted as a dash is 1/10, and that the probability that a dash is transmitted as a dot is likewise 1/10, then the [[law of total probability]] can be used to calculate <math>P(\text{dot received})</math>.
 
: <math>P(\text{dot received}) = P(\text{dot received } \cap \text{ dot sent}) + P(\text{dot received } \cap \text{ dash sent})</math>
 
: <math>P(\text{dot received}) = P(\text{dot received } \mid \text{ dot sent})P(\text{dot sent}) + P(\text{dot received } \mid \text{ dash sent})P(\text{dash sent})</math>
 
: <math>P(\text{dot received}) = \frac{9}{10}\times\frac{3}{7} + \frac{1}{10}\times\frac{4}{7} = \frac{31}{70}</math>
 
Now, <math>P(\text{dot sent } \mid \text{ dot received})</math> can be calculated:
 
: <math>P(\text{dot sent } \mid \text{ dot received}) = P(\text{dot received } \mid \text{ dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})} = \frac{9}{10}\times \frac{\frac{3}{7}}{\frac{31}{70}} = \frac{27}{31}</math><ref>{{Cite web|title=Conditional Probability and Independence|url=http://www.math.ntu.edu.tw/~hchen/teaching/StatInference/notes/lecture4.pdf|access-date=2021-12-22}}</ref>
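The calculation can be reproduced with exact rational arithmetic; the short sketch below simply repeats the three steps above and assumes nothing beyond the probabilities already given.

<syntaxhighlight lang="python">
from fractions import Fraction

# Prior sending probabilities (dots : dashes = 3 : 4).
p_dot_sent = Fraction(3, 7)
p_dash_sent = Fraction(4, 7)

# Channel error model: a symbol is corrupted with probability 1/10.
p_dot_rec_given_dot = Fraction(9, 10)    # dot sent, dot received
p_dot_rec_given_dash = Fraction(1, 10)   # dash sent, dot received

# Law of total probability for receiving a dot.
p_dot_rec = p_dot_rec_given_dot * p_dot_sent + p_dot_rec_given_dash * p_dash_sent
print(p_dot_rec)                         # 31/70

# Bayes' theorem: probability that a dot was sent given that a dot was received.
p_dot_sent_given_rec = p_dot_rec_given_dot * p_dot_sent / p_dot_rec
print(p_dot_sent_given_rec)              # 27/31
</syntaxhighlight>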
 
== Statistical independence ==
:<math>P(B\mid A) = P(B)</math>
 
is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition as the conditional probabilities may be undefined, and the preferred definition is symmetrical in ''A'' and ''B''. Independence is not the same as the events being disjoint.<ref>{{Cite book|last=Tijms|first=Henk|url=https://www.cambridge.org/core/books/understanding-probability/B82E701FAAD2C0C2CF36E05CFC0FF3F2|title=Understanding Probability|date=2012|publisher=Cambridge University Press|isbn=978-1-107-65856-1|edition=3rd|___location=Cambridge|doi=10.1017/cbo9781139206990}}</ref>
 
Given an event pair [''A'', ''B''] and an event ''C'', the pair is defined to be [[Conditional independence|conditionally independent]] given ''C'' if the product holds true:<ref>{{Cite book|last=Pfeiffer|first=Paul E.|url=https://www.worldcat.org/oclc/858880328|title=Conditional Independence in Applied Probability|date=1978|publisher=Birkhäuser Boston|isbn=978-1-4612-6335-7|___location=Boston, MA|oclc=858880328}}</ref>
 
: <math>P(AB \mid C) = P(A \mid C)P(B \mid C).</math>
 
This theorem is useful in applications where multiple independent events are being observed.
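The defining product can be verified directly on a small discrete distribution. In the sketch below, the joint distribution over <math>(A, B, C)</math> is built from made-up conditional probabilities so that ''A'' and ''B'' are independent given ''C''; the check then confirms <math>P(AB \mid C) = P(A \mid C)P(B \mid C)</math> for each value of ''C''.

<syntaxhighlight lang="python">
import math
from itertools import product

# Illustrative construction: A and B independent conditionally on C.
p_c = {0: 0.4, 1: 0.6}          # P(C = c)
p_a_given_c = {0: 0.2, 1: 0.7}  # P(A = 1 | C = c)
p_b_given_c = {0: 0.5, 1: 0.1}  # P(B = 1 | C = c)

def bern(p, v):                 # P(value v) for a Bernoulli(p) variable
    return p if v == 1 else 1 - p

joint = {(a, b, c): p_c[c] * bern(p_a_given_c[c], a) * bern(p_b_given_c[c], b)
         for a, b, c in product((0, 1), repeat=3)}

def prob(pred):                 # probability of an arbitrary event
    return sum(p for outcome, p in joint.items() if pred(*outcome))

for c in (0, 1):
    p_ab_c = prob(lambda a, b, cc: a == 1 and b == 1 and cc == c) / p_c[c]
    p_a_c = prob(lambda a, b, cc: a == 1 and cc == c) / p_c[c]
    p_b_c = prob(lambda a, b, cc: b == 1 and cc == c) / p_c[c]
    print(c, math.isclose(p_ab_c, p_a_c * p_b_c))   # True for both values of C
</syntaxhighlight>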
 
'''Independent events vs. mutually exclusive events'''
=== Assuming conditional probability is of similar size to its inverse ===
{{Main|Confusion of the inverse}}
[[File:Bayes theorem visualisation.svg|thumb|450x450px|A geometric visualization of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that <math>P(A \mid B) P(B) = P(B \mid A) P(A)</math>, i.e. <math>P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}</math>. Similar reasoning can be used to show that <math>P(\bar A \mid B) = \frac{P(B \mid \bar A)\, P(\bar A)}{P(B)}</math> etc.]]
In general, it cannot be assumed that ''P''(''A''|''B'')&nbsp;≈&nbsp;''P''(''B''|''A''). This can be an insidious error, even for those who are highly conversant with statistics.<ref>{{cite book |last=Paulos |first=J. A. |year=1988 |title=Innumeracy: Mathematical Illiteracy and its Consequences |publisher=Hill and Wang |isbn=0-8090-7447-8 |at=p. 63 ''et seq.''}}</ref> The relationship between ''P''(''A''|''B'') and ''P''(''B''|''A'') is given by [[Bayes' theorem]]:
:<math>\begin{align}
P(B\mid A) &= \frac{P(A\mid B)\, P(B)}{P(A)} \\
&= \frac{P(A\mid B)\, P(B)}{\sum_n P(A\mid B_n)\, P(B_n)},
\end{align}</math>
where the events <math>(B_n)</math> form a countable [[Partition of a set|partition]] of <math>\Omega</math>.
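The size of the discrepancy depends on the prior probabilities. The following sketch applies Bayes' theorem with made-up rates (a rare condition ''B'' with prior 0.01, and an observation ''A'' seen in 99% of ''B'' cases but only 5% of the remaining cases) and shows how far <math>P(B \mid A)</math> can be from <math>P(A \mid B)</math>.

<syntaxhighlight lang="python">
# Illustrative rates (assumptions, not taken from the text above).
p_b = 0.01               # P(B): the condition is rare
p_a_given_b = 0.99       # P(A | B)
p_a_given_not_b = 0.05   # P(A | not B)

# Law of total probability, then Bayes' theorem.
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
p_b_given_a = p_a_given_b * p_b / p_a

print(p_a_given_b)       # 0.99
print(p_b_given_a)       # about 0.167: far smaller than P(A | B)
</syntaxhighlight>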
 
This fallacy may arise through [[selection bias]].<ref>{{cite journal |first=F. Thomas |last=Bruss |author-link=F. Thomas Bruss |title=Der Wyatt-Earp-Effekt oder die betörende Macht kleiner Wahrscheinlichkeiten |trans-title=The Wyatt Earp effect, or the beguiling power of small probabilities |language=de |journal=[[Spektrum der Wissenschaft]] |volume=2 |pages=110–113 |year=2007}}</ref> For example, in the context of a medical claim, let ''S''{{sub|''C''}} be the event that a [[sequelae|sequela]] (chronic disease) ''S'' occurs as a consequence of circumstance (acute condition) ''C''. Let ''H'' be the event that an individual seeks medical help. Suppose that in most cases, ''C'' does not cause ''S'' (so that ''P''(''S''{{sub|''C''}}) is low). Suppose also that medical attention is only sought if ''S'' has occurred due to ''C''. From experience of patients, a doctor may therefore erroneously conclude that ''P''(''S''{{sub|''C''}}) is high. The actual probability observed by the doctor is ''P''(''S''{{sub|''C''}}|''H'').
 
=== Over- or under-weighting priors ===
Formally, ''P''(''A''&nbsp;|&nbsp;''B'') is defined as the probability of ''A'' according to a new probability function on the sample space, such that outcomes not in ''B'' have probability 0 and that it is consistent with all original [[probability measure]]s.<ref>George Casella and Roger L. Berger (1990), ''Statistical Inference'', Duxbury Press, {{ISBN|0-534-11958-1}} (p. 18 ''et seq.'')</ref><ref name="grinstead">[http://math.dartmouth.edu/~prob/prob/prob.pdf Grinstead and Snell's Introduction to Probability], p. 134</ref>
 
Let Ω be a discrete [[sample space]] with [[elementary event]]s {''ω''}, and let ''P'' be the probability measure with respect to the [[σ-algebra]] of Ω. Suppose we are told that the event ''B''&nbsp;⊆&nbsp;Ω has occurred. A new [[probability distribution]] (denoted by the conditional notation) is to be assigned on {''ω''} to reflect this. All events that are not in ''B'' will have null probability in the new distribution. For events in ''B'', two conditions must be met: the probability of ''B'' is one and the relative magnitudes of the probabilities must be preserved. The former is required by the [[Probability axioms|axioms of probability]], and the latter stems from the fact that the new probability measure has to be the analog of ''P'' in which the probability of ''B'' is one, and every event that is not in ''B'', therefore, has a null probability. Hence, for some scale factor ''α'', the new distribution must satisfy:
 
#<math>\omega \in B : P(\omega\mid B) = \alpha P(\omega)</math>