{{Short description|Probability of an event occurring, given that another event has already occurred}}
{{Probability fundamentals}}
In [[probability theory]], '''conditional probability''' is a measure of the [[probability]] of an [[Event (probability theory)|event]] occurring, given that another event (by assumption, presumption, assertion or evidence) is already known to have occurred.<ref name="Allan Gut 2013">{{cite book |last=Gut |first=Allan |title=Probability: A Graduate Course |year=2013 |publisher=Springer |___location=New York, NY |isbn=978-1-4614-4707-8 |edition=Second }}</ref> This particular method relies on event ''A'' occurring with some sort of relationship with another event ''B''. In this situation, the event ''A'' can be analyzed by a conditional probability with respect to ''B''. If the event of interest is {{mvar|A}} and the event {{mvar|B}} is known or assumed to have occurred, "the conditional probability of {{mvar|A}} given {{mvar|B}}", or "the probability of {{mvar|A}} under the condition {{mvar|B}}", is usually written as {{math|P(''A''{{!}}''B'')}}<ref name=":0">{{Cite web|title=Conditional Probability|url=https://www.mathsisfun.com/data/probability-events-conditional.html|access-date=2020-09-11|website=www.mathsisfun.com}}</ref> or occasionally {{math|P{{sub|''B''}}(''A'')}}. This can also be understood as the fraction of the probability of {{mvar|B}} that intersects with {{mvar|A}}, or the ratio of the probability of both events happening to the probability of the "given" event happening (how often {{mvar|A}} occurs among the occasions on which {{mvar|B}} occurs): <math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=26|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X|url-access=subscription}}</ref>
 
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. The conditional probability that someone unwell (sick) is coughing might be 75%, for instance, in which case we would have {{math|P(Cough)}} = 5% and {{math|P(Cough{{!}}Sick)}} = 75%. Although there is a relationship between {{mvar|A}} and {{mvar|B}} in this example, such a relationship or dependence between {{mvar|A}} and {{mvar|B}} is not necessary, nor do they have to occur simultaneously.
 
{{math|P(''A''{{!}}''B'')}} may or may not be equal to {{math|P(''A'')}}, the '''unconditional probability''' or '''absolute probability''' of {{mvar|A}}. If {{math|1=P(''A''{{!}}''B'') = P(''A'')}}, then events {{mvar|A}} and {{mvar|B}} are said to be [[Independence (probability theory)#Two events|''independent'']]: in such a case, knowledge about either event does not alter the likelihood of the other. {{math|P(''A''{{!}}''B'')}} (the conditional probability of {{mvar|A}} given {{mvar|B}}) typically differs from {{math|P(''B''{{!}}''A'')}}. For example, if a person has [[dengue fever]], the person might have a 90% chance of testing positive for the disease. In this case, what is being measured is that if event {{mvar|B}} (''having dengue'') has occurred, the probability of {{mvar|A}} (''testing positive'') given that {{mvar|B}} has occurred is 90%, written {{math|P(''A''{{!}}''B'')}} = 90%. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease due to high [[false positive]] rates. In this case, the probability of the event {{mvar|B}} (''having dengue'') given that the event {{mvar|A}} (''testing positive'') has occurred is 15%, or {{math|P(''B''{{!}}''A'')}} = 15%. Falsely equating the two probabilities can lead to various errors of reasoning, as commonly seen in [[base rate fallacy|base rate fallacies]].
 
While conditional probabilities can provide extremely useful information, the information available is often limited. Therefore, it can be useful to reverse or convert a conditional probability using [[Bayes' theorem]]: <math>P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=25–40|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X|url-access=subscription}}</ref> Another option is to display conditional probabilities in a [[conditional probability table]] to illuminate the relationship between events.
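The reversal above can be illustrated numerically. The following sketch uses Python with assumed values for the prevalence, the sensitivity and the false-positive rate of a test (illustrative numbers only, not figures taken from the dengue example above):

<syntaxhighlight lang="python">
# A minimal sketch of reversing a conditional probability with Bayes' theorem.
# All numbers below are illustrative assumptions.
p_disease = 0.01               # P(B): prior probability of having the disease
p_pos_given_disease = 0.90     # P(A|B): probability of a positive test if diseased
p_pos_given_healthy = 0.05     # P(A|not B): false-positive rate

# Law of total probability: P(A) = P(A|B) P(B) + P(A|not B) P(not B)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(B|A) = P(A|B) P(B) / P(A)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(positive)           = {p_pos:.4f}")                # 0.0585
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")  # about 0.154
</syntaxhighlight>

Even with a sensitive test, the reversed probability P(disease | positive) is small here because the assumed prior P(disease) is small.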
 
== Definition ==
[[File:Conditional probability.svg|thumb|Illustration of conditional probabilities with an [[Euler diagram]]. The unconditional [[probability]] P(''A'') = 0.30 + 0.10 + 0.12 = 0.52. However, the conditional probability ''P''(''A''{{pipe}}''B''{{sub|1}}) = 1, ''P''(''A''{{pipe}}''B''{{sub|2}}) = 0.12 ÷ (0.12 + 0.04) = 0.75, and ''P''(''A''{{pipe}}''B''{{sub|3}}) = 0.]]
 
[[File:Probability tree diagram.svg|thumb|On a [[Tree diagram (probability theory)|tree diagram]], branch probabilities are conditional on the event associated with the parent node. (Here, the overbars indicate that the event does not occur.)]]
 
[[File:Venn Pie Chart describing Bayes' law.png|thumb|Venn pie chart describing conditional probabilities]]
 
=== Conditioning on an event ===
 
==== [[Andrey Kolmogorov|Kolmogorov]] definition ====
Given two [[event (probability theory)|events]] {{mvar|A}} and {{mvar|B}} from the [[sigma-field]] of a probability space, with the [[marginal probability|unconditional probability]] of {{mvar|B}} being greater than zero (i.e., {{math|P(''B'') > 0}}), the conditional probability of {{mvar|A}} given {{mvar|B}} (<math>P(A \mid B)</math>) is the probability of ''A'' occurring if ''B'' has or is assumed to have happened.<ref name=":1">{{Cite book|last=Reichl|first=Linda Elizabeth|title=A Modern Course in Statistical Physics|publisher=WILEY-VCH|year=2016|isbn=978-3-527-69049-7|edition=4th revised and updated|chapter=2.3 Probability}}</ref> ''A'' is assumed to be the set of all possible outcomes of an experiment or random trial that has a restricted or reduced sample space. The conditional probability can be found by the [[quotient]] of the probability of the joint intersection of events {{mvar|A}} and {{mvar|B}}, that is, <math>P(A \cap B)</math>, the probability at which ''A'' and ''B'' occur together (although not necessarily at the same time), and the [[probability]] of {{mvar|B}}:<ref name=":0" /><ref>{{citation|last=Kolmogorov|first=Andrey|title=Foundations of the Theory of Probability|publisher=Chelsea|year=1956 }}</ref><ref>{{Cite web|title=Conditional Probability|url=http://www.stat.yale.edu/Courses/1997-98/101/condprob.htm|access-date=2020-09-11|website=www.stat.yale.edu}}</ref>
 
:<math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}.</math>
 
For a sample space consisting of equally likely outcomes, the probability of the event ''A'' is understood as the fraction of the number of outcomes in ''A'' to the number of all outcomes in the sample space. Then, this equation is understood as the fraction of the set <math>A \cap B</math> to the set ''B''. Note that the above equation is a definition, not just a theoretical result. We denote the quantity <math>\frac{P(A \cap B)}{P(B)}</math> as <math>P(A\mid B)</math> and call it the "conditional probability of {{mvar|A}} given {{mvar|B}}."
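For equally likely outcomes, this counting interpretation can be checked directly. The sketch below (in Python) uses the two-dice events from the worked example later in this article, where the event of interest is "the first die shows 2" and the conditioning event is "the sum is at most 5":

<syntaxhighlight lang="python">
from itertools import product
from fractions import Fraction

# Sample space: all ordered rolls of two fair six-sided dice (36 equally likely outcomes).
omega = set(product(range(1, 7), repeat=2))

A = {(d1, d2) for (d1, d2) in omega if d1 == 2}        # first die shows 2
B = {(d1, d2) for (d1, d2) in omega if d1 + d2 <= 5}   # sum is at most 5

def p(event):
    """P(E) as the fraction of outcomes of the sample space lying in E."""
    return Fraction(len(event), len(omega))

# Kolmogorov definition: P(A|B) = P(A ∩ B) / P(B) ...
print(p(A & B) / p(B))               # 3/10
# ... which, for equally likely outcomes, equals the counting fraction |A ∩ B| / |B|.
print(Fraction(len(A & B), len(B)))  # 3/10
</syntaxhighlight>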
Some authors, such as [[Bruno de Finetti|de Finetti]], prefer to introduce conditional probability as an [[Probability axioms|axiom of probability]]:
 
:<math>P(A \cap B) = P(A \mid B)P(B).</math>
 
This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as "the probability of ''B'' occurring multiplied by the probability of ''A'' occurring, provided that ''B'' has occurred, is equal to the probability of ''A'' and ''B'' occurring together, although not necessarily at the same time". Additionally, this may be preferred philosophically; under major [[probability interpretations]], such as the [[Subjective probability|subjective theory]], conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of <math>A \cap B</math> and introduces a symmetry with the summation axiom for the Poincaré formula:
:<math>P(A \cup B) = P(A) + P(B) - P(A \cap B)</math>
:Thus the equations can be combined to find a new representation of the intersection and the union:
:<math> P(A \cap B) = P(A) + P(B) - P(A \cup B) = P(A \mid B)P(B) </math>
:<math> P(A \cup B) = P(A) + P(B) - P(A \mid B)P(B) </math>
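As a numerical illustration with assumed values <math>P(A) = 0.5</math>, <math>P(B) = 0.4</math> and <math>P(A \mid B) = 0.75</math>, the combined equations give
:<math> P(A \cap B) = P(A \mid B)P(B) = 0.75 \times 0.4 = 0.3, \qquad P(A \cup B) = P(A) + P(B) - P(A \mid B)P(B) = 0.5 + 0.4 - 0.3 = 0.6 .</math>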
 
==== As the probability of a conditional event ====
Conditional probability can be defined as the probability of a conditional event <math>A_B</math>. The [[Goodman–Nguyen–Van Fraassen algebra|Goodman–Nguyen–Van Fraassen]] conditional event can be defined as:
 
:<math>A_B = \bigcup_{i \ge 1} \left( \bigcap_{j<i} \overline{B}_j, A_i B_i \right), </math> where <math>A_i</math> and <math>B_i</math> represent states or elements of ''A'' or ''B''.<ref>{{Cite journal|last1=Flaminio|first1=Tommaso|last2=Godo|first2=Lluis|last3=Hosni|first3=Hykel|date=2020-09-01|title=Boolean algebras of conditionals, probability and logic|url=https://www.sciencedirect.com/science/article/pii/S000437022030103X|journal=Artificial Intelligence|language=en|volume=286|article-number=103347|doi=10.1016/j.artint.2020.103347|arxiv=2006.04673|s2cid=214584872 |issn=0004-3702}}</ref>
 
It can be shown that
:<math>P(A_B)= \frac{P(A \cap B)}{P(B)}</math>
 
which meets the Kolmogorov definition of conditional probability.<ref>{{Citation|last=Van Fraassen|first=Bas C.|title=Probabilities of Conditionals|date=1976|url=https://doi.org/10.1007/978-94-010-1853-1_10|work=Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science: Volume I Foundations and Philosophy of Epistemic Applications of Probability Theory|pages=261–308|editor-last=Harper|editor-first=William L.|series=The University of Western Ontario Series in Philosophy of Science|place=Dordrecht|publisher=Springer Netherlands|language=en|doi=10.1007/978-94-010-1853-1_10|isbn=978-94-010-1853-1|access-date=2021-12-04|editor2-last=Hooker|editor2-first=Clifford Alan|url-access=subscription}}</ref>
 
=== Conditioning on an event of probability zero ===
The case of greatest interest is that of a random variable {{mvar|Y}}, conditioned on a continuous random variable {{mvar|X}} resulting in a particular outcome {{mvar|x}}. The event <math>B = \{ X = x \}</math> has probability zero and, as such, cannot be conditioned on.
 
Instead of conditioning on {{mvar|X}} being ''exactly'' {{mvar|x}}, we could condition on it being closer than distance <math>\varepsilon</math> away from {{mvar|x}}. The event <math>B = \{ x-\varepsilon < X < x+\varepsilon \}</math> will generally have nonzero probability and hence can be conditioned on.
We can then take the [[limit (mathematics)|limit]]
{{NumBlk|::|<math>\lim_{\varepsilon \to 0} P(A \mid x-\varepsilon < X < x+\varepsilon).</math>|{{EquationRef|1}}}}
 
For example, if two continuous random variables {{mvar|X}} and {{mvar|Y}} have a joint density <math>f_{X,Y}(x,y)</math>, then by [[L'Hôpital's rule]] and [[Leibniz integral rule]], upon differentiation with respect to <math>\varepsilon</math>:
:<math>
\begin{aligned}
\lim_{\varepsilon \to 0} P(Y \in U \mid x_0-\varepsilon < X < x_0+\varepsilon) &=
\lim_{\varepsilon \to 0} \frac{\int_{x_0-\varepsilon}^{x_0+\varepsilon} \int_U f_{X, Y}(x, y) \, \mathrm{d}y \, \mathrm{d}x}{\int_{x_0-\varepsilon}^{x_0+\varepsilon} \int_\mathbb{R} f_{X, Y}(x, y) \, \mathrm{d}y \, \mathrm{d}x} \\[6pt]
&= \frac{\int_U f_{X, Y}(x_0, y) \, \mathrm{d}y}{\int_\mathbb{R} f_{X, Y}(x_0, y) \, \mathrm{d}y}.
\end{aligned}
</math>
The resulting limit is the [[conditional probability distribution]] of {{mvar|Y}} given {{mvar|X}} and exists when the denominator, the probability density <math>f_X(x_0)</math>, is strictly positive.
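As an illustration (with a density chosen only for the example), let the joint density be <math>f_{X,Y}(x,y) = x + y</math> on the unit square <math>[0,1]^2</math> and zero elsewhere. For <math>x_0 \in [0,1]</math> and <math>U \subseteq [0,1]</math>, the limit above evaluates to
:<math> \frac{\int_U (x_0 + y) \, \mathrm{d}y}{\int_0^1 (x_0 + y) \, \mathrm{d}y} = \frac{\int_U (x_0 + y) \, \mathrm{d}y}{x_0 + \tfrac{1}{2}}, </math>
so the conditional density of {{mvar|Y}} given <math>X = x_0</math> is <math>f_{Y \mid X}(y \mid x_0) = (x_0 + y)/(x_0 + \tfrac{1}{2})</math> for <math>y \in [0,1]</math>.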
 
It is tempting to ''define'' the undefined probability <math>P(A \mid X=x)</math> using this limit ({{EquationNote|1}}), but this cannot be done in a consistent manner. In particular, it is possible to find random variables {{mvar|X}} and {{mvar|W}} and values {{mvar|x}}, {{mvar|w}} such that the events <math>\{X = x\}</math> and <math>\{W = w\}</math> are identical but the resulting limits are not:<ref>{{cite web |last1=Gal |first1=Yarin |title=The Borel–Kolmogorov paradox |url=https://www.cs.ox.ac.uk/people/yarin.gal/website/PDFs/Short-talk-03-2014.pdf}}</ref>
:<math>\lim_{\varepsilon \to 0} P(A \mid x-\varepsilon \le X \le x+\varepsilon) \neq \lim_{\varepsilon \to 0} P(A \mid w-\varepsilon \le W \le w+\varepsilon).</math>
The [[Borel–Kolmogorov paradox]] demonstrates this with a geometrical argument.
 
=== Conditioning on a discrete random variable ===
{{See also|Conditional probability distribution|Conditional expectation|Regular conditional probability}}
Let {{mvar|X}} be a discrete random variable and let its possible outcomes be denoted {{mvar|V}}. For example, if {{mvar|X}} represents the value of a rolled die, then {{mvar|V}} is the set <math>\{ 1, 2, 3, 4, 5, 6 \}</math>. Let us assume for the sake of presentation that each value in {{mvar|V}} has a nonzero probability.
 
For a value {{mvar|x}} in {{mvar|V}} and an event {{mvar|A}}, the conditional probability
where <math> b_i n \in \mathbb{N}</math><ref name=Draheim2017b />
 
[[Radical probabilism|Jeffrey conditionalization]]<ref>{{citation|first=Richard C.|last=Jeffrey|title=The Logic of Decision |edition=2nd |publisher=University of Chicago Press|year=1983 |isbn=9780226395821|url=https://books.google.com/books?id=geJ-SwTcmyEC&q=%22conditional+probability%22}}</ref><ref>{{cite web|title=Bayesian Epistemology| url=https://plato.stanford.edu/entries/epistemology-bayesian/|publisher=Stanford Encyclopedia of Philosophy|access-date=December 29, 2017|year=2017 }}</ref>
is a special case of partial conditional probability, in which the condition events must form a [[Partition of a set|partition]]:
 
== Example ==
Suppose that somebody secretly rolls two fair six-sided [[dice]], and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5.
* Let ''D''<sub>1</sub> be the value rolled on [[dice|die]] 1.
* Let ''D''<sub>2</sub> be the value rolled on [[dice|die]] 2.
 
'''''Probability that'' ''D''<sub>1</sub>&nbsp;=&nbsp;2'''
 
=== Example ===
When [[Morse code]] is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous. This is often taken as interference in the transmission of a message. Therefore, it is important to consider, when a "dot" is received, the probability that a "dot" was in fact sent. This is represented by: <math>P(\text{dot sent} \mid \text{dot received}) = P(\text{dot received} \mid \text{dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})}.</math> In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so the probabilities of a "dot" and a "dash" are <math>P(\text{dot sent}) = \frac {3}{7} \text{ and } P(\text{dash sent}) = \frac {4}{7}</math>. If it is assumed that the probability that a dot is transmitted as a dash is 1/10, and that the probability that a dash is transmitted as a dot is likewise 1/10, then Bayes's rule can be used to calculate <math>P(\text{dot received})</math>.
 
: <math>P(\text{dot received}) = P(\text{dot received} \cap \text{dot sent}) + P(\text{dot received} \cap \text{dash sent})</math>
 
: <math>P(\text{dot received}) = P(\text{dot received} \mid \text{dot sent})P(\text{dot sent}) + P(\text{dot received} \mid \text{dash sent})P(\text{dash sent})</math>
 
: <math>P(\text{dot received}) = \frac{9}{10}\times\frac{3}{7} + \frac{1}{10}\times\frac{4}{7} = \frac{31}{70}</math>
 
Now, <math>P(\text{dot sent} \mid \text{dot received})</math> can be calculated:
 
: <math>P(\text{dot sent} \mid \text{dot received}) = P(\text{dot received} \mid \text{dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})} = \frac{9}{10}\times \frac{\frac{3}{7}}{\frac{31}{70}} = \frac{27}{31}</math><ref>{{Cite web|title=Conditional Probability and Independence|url=http://www.math.ntu.edu.tw/~hchen/teaching/StatInference/notes/lecture4.pdf|access-date=2021-12-22}}</ref>
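The arithmetic in this example can be verified with exact rational numbers; a short sketch in Python:

<syntaxhighlight lang="python">
from fractions import Fraction

# Sending probabilities: dots and dashes are sent in the ratio 3:4.
p_dot_sent = Fraction(3, 7)
p_dash_sent = Fraction(4, 7)

# Error model from the text: each symbol is received incorrectly with probability 1/10.
p_dot_rec_given_dot_sent = Fraction(9, 10)
p_dot_rec_given_dash_sent = Fraction(1, 10)

# Law of total probability: P(dot received).
p_dot_rec = (p_dot_rec_given_dot_sent * p_dot_sent
             + p_dot_rec_given_dash_sent * p_dash_sent)
print(p_dot_rec)  # 31/70

# Bayes' rule: P(dot sent | dot received).
print(p_dot_rec_given_dot_sent * p_dot_sent / p_dot_rec)  # 27/31
</syntaxhighlight>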
 
== Statistical independence ==
:<math>P(B\mid A) = P(B)</math>
 
is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined, and the preferred definition is symmetrical in ''A'' and ''B''. Independence of two events is not the same as their being disjoint (mutually exclusive).<ref>{{Cite book|last=Tijms|first=Henk|url=https://www.cambridge.org/core/books/understanding-probability/B82E701FAAD2C0C2CF36E05CFC0FF3F2|title=Understanding Probability|date=2012|publisher=Cambridge University Press|isbn=978-1-107-65856-1|edition=3rd|___location=Cambridge|doi=10.1017/cbo9781139206990}}</ref>
 
Given the independent event pair [''A'', ''B''] and an event ''C'', the pair is defined to be [[Conditional independence|conditionally independent]] if the following product holds true:<ref>{{Cite book|last=Pfeiffer|first=Paul E.|url=https://www.worldcat.org/oclc/858880328|title=Conditional Independence in Applied Probability|date=1978|publisher=Birkhäuser Boston|isbn=978-1-4612-6335-7|___location=Boston, MA|oclc=858880328}}</ref>
 
: <math>P(AB \mid C) = P(A \mid C)P(B \mid C).</math>
 
This theorem is useful in applications where multiple independent events are being observed.
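A brief sketch of this product rule on a toy joint distribution (the numbers are arbitrary, chosen so that ''A'' and ''B'' are independent conditionally on ''C''):

<syntaxhighlight lang="python">
from itertools import product

# Toy joint distribution over (A, B, C), built so that A and B are independent
# given C and given its complement. The numerical values are arbitrary.
p_C = 0.5
p_A_given_C = {True: 0.3, False: 0.8}   # P(A | C) and P(A | not C)
p_B_given_C = {True: 0.6, False: 0.2}   # P(B | C) and P(B | not C)

joint = {}
for a, b, c in product([True, False], repeat=3):
    pc = p_C if c else 1 - p_C
    pa = p_A_given_C[c] if a else 1 - p_A_given_C[c]
    pb = p_B_given_C[c] if b else 1 - p_B_given_C[c]
    joint[(a, b, c)] = pc * pa * pb

def prob(pred):
    """Probability of the event described by a predicate on (a, b, c)."""
    return sum(p for (a, b, c), p in joint.items() if pred(a, b, c))

p_c = prob(lambda a, b, c: c)
lhs = prob(lambda a, b, c: a and b and c) / p_c              # P(AB | C)
rhs = (prob(lambda a, b, c: a and c) / p_c) * \
      (prob(lambda a, b, c: b and c) / p_c)                  # P(A|C) P(B|C)
print(abs(lhs - rhs) < 1e-12)  # True: the product rule holds
</syntaxhighlight>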
 
'''Independent events vs. mutually exclusive events'''
:''These fallacies should not be confused with Robert K. Shope's 1978 [http://lesswrong.com/r/discussion/lw/9om/the_conditional_fallacy_in_contemporary_philosophy/ "conditional fallacy"], which deals with counterfactual examples that [[beg the question]].''
 
=== Assuming conditional probability is of similar size to its inverse === <!-- Brief example (+ diagram?) would be nice here -->
{{Main|Confusion of the inverse}}
[[File:Bayes theorem visualisation.svg|thumb|450x450px|A geometric visualization of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that <math>P(A\mid B) P(B) = P(B\mid A) P(A)</math>, i.e. <math>P(A\mid B) = \frac{P(B\mid A)\, P(A)}{P(B)}</math>. Similar reasoning can be used to show that <math>P(\bar A\mid B) = \frac{P(B\mid \bar A)\, P(\bar A)}{P(B)}</math> etc.]]
In general, it cannot be assumed that ''P''(''A''|''B'')&nbsp;≈&nbsp;''P''(''B''|''A''). This can be an insidious error, even for those who are highly conversant with statistics.<ref>{{cite book |last=Paulos |first=J. A. |year=1988 |title=Innumeracy: Mathematical Illiteracy and its Consequences |publisher=Hill and Wang |isbn=0-8090-7447-8 |at=p. 63 ''et seq.''}}</ref> The relationship between ''P''(''A''|''B'') and ''P''(''B''|''A'') is given by [[Bayes' theorem]]:
 
:<math>\begin{align}
P(B\mid A) &= \frac{P(A\mid B) P(B)}{P(A)}\\
\end{align}</math>
 
That is, ''P''(''A''|''B'')&nbsp;≈&nbsp;''P''(''B''|''A'') only if ''P''(''B'')/''P''(''A'')&nbsp;≈&nbsp;1, or equivalently, ''P''(''A'')&nbsp;≈&nbsp;''P''(''B'').
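For instance, with the illustrative values ''P''(''A'') = 0.10, ''P''(''B'') = 0.50 and ''P''(''B''|''A'') = 0.90, Bayes' theorem gives
:<math> P(A\mid B) = \frac{P(B\mid A)\,P(A)}{P(B)} = \frac{0.90 \times 0.10}{0.50} = 0.18, </math>
which is far from ''P''(''B''|''A'') = 0.90 because ''P''(''B'')/''P''(''A'') = 5 is not close to 1.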
 
=== Assuming marginal and conditional probabilities are of similar size === <!-- Diagram might be nice here -->
where the events <math>(B_n)</math> form a countable [[Partition of a set|partition]] of <math>\Omega</math>.
 
This fallacy may arise through [[selection bias]].<ref>{{cite journal |first=F. Thomas |last=Bruss |author-link=F. Thomas Bruss |title=Der Wyatt-Earp-Effekt oder die betörende Macht kleiner Wahrscheinlichkeiten |trans-title=The Wyatt Earp effect, or the beguiling power of small probabilities |language=de |journal=[[Spektrum der Wissenschaft]] |volume=2 |pages=110–113 |year=2007}}</ref> For example, in the context of a medical claim, let ''S''{{sub|''C''}} be the event that a [[sequelae|sequela]] (chronic disease) ''S'' occurs as a consequence of circumstance (acute condition) ''C''. Let ''H'' be the event that an individual seeks medical help. Suppose that in most cases, ''C'' does not cause ''S'' (so that ''P''(''S''{{sub|''C''}}) is low). Suppose also that medical attention is only sought if ''S'' has occurred due to ''C''. From experience of patients, a doctor may therefore erroneously conclude that ''P''(''S''{{sub|''C''}}) is high. The actual probability observed by the doctor is ''P''(''S''{{sub|''C''}}|''H'').
 
=== Over- or under-weighting priors ===
Formally, ''P''(''A''&nbsp;|&nbsp;''B'') is defined as the probability of ''A'' according to a new probability function on the sample space, such that outcomes not in ''B'' have probability 0 and that it is consistent with all original [[probability measure]]s.<ref>George Casella and Roger L. Berger (1990), ''Statistical Inference'', Duxbury Press, {{ISBN|0-534-11958-1}} (p. 18 ''et seq.'')</ref><ref name="grinstead">[http://math.dartmouth.edu/~prob/prob/prob.pdf Grinstead and Snell's Introduction to Probability], p. 134</ref>
 
Let Ω be a discrete [[sample space]] with [[elementary event]]s {''ω''}, and let ''P'' be the probability measure with respect to the [[σ-algebra]] of Ω. Suppose we are told that the event ''B''&nbsp;⊆&nbsp;Ω has occurred. A new [[probability distribution]] (denoted by the conditional notation) is to be assigned on {''ω''} to reflect this. All events that are not in ''B'' will have null probability in the new distribution. For events in ''B'', two conditions must be met: the probability of ''B'' is one and the relative magnitudes of the probabilities must be preserved. The former is required by the [[Probability axioms|axioms of probability]], and the latter stems from the fact that the new probability measure has to be the analog of ''P'' in which the probability of ''B'' is one, and every event that is not in ''B'', therefore, has a null probability. Hence, for some scale factor ''α'', the new distribution must satisfy:
 
#<math>\omega \in B : P(\omega\mid B) = \alpha P(\omega)</math>
\end{align}</math>
 
So the new [[probability distribution]] is
 
#<math>\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)}</math>
* [[Conditional probability distribution]]
* [[Conditioning (probability)]]
* [[Disintegration theorem]]
* [[Joint probability distribution]]
* [[Monty Hall problem]]
* [[Pairwise independence|Pairwise independent distribution]]
* [[Posterior probability]]
* [[Postselection]]
* [[Regular conditional probability]]