Conditional probability: Difference between revisions

Undid revision 1150738271 by 2603:8000:7901:D04:6C40:49F6:1AAC:4913 (talk)
Common fallacies: Changed the diagram, for obvious reason
:''These fallacies should not be confused with Robert K. Shope's 1978 [http://lesswrong.com/r/discussion/lw/9om/the_conditional_fallacy_in_contemporary_philosophy/ "conditional fallacy"], which deals with counterfactual examples that [[beg the question]].''
 
=== Assuming conditional probability is of similar size to its inverse === <!-- Brief example (+ diagram?) would be nice here -->
{{Main|Confusion of the inverse}}
[[File:Bayes theorem visualisation.svg|thumb|450x450px|A geometric visualization of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that P(A<nowiki>|</nowiki>B) P(B) = P(B<nowiki>|</nowiki>A) P(A) i.e. P(A<nowiki>|</nowiki>B) = {{sfrac|P(B<nowiki>|</nowiki>A) P(A)|P(B)}}. Similar reasoning can be used to show that P(Ā<nowiki>|</nowiki>B) = {{sfrac|P(B<nowiki>|</nowiki>Ā) P(Ā)|P(B)}} etc.]]
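As a worked check of the identity stated in the caption, one can assign concrete weights to the four cells of the table. The particular assignment used below is only an illustrative assumption and need not match the figure's actual layout: give weight 2 to the cell where both ''A'' and ''B'' hold, 3 to the cell where ''B'' holds but ''A'' does not, 6 to the cell where ''A'' holds but ''B'' does not, and 9 to the cell where neither holds, for a total weight of 20. Then

:<math>\begin{align}
P(B) &= \tfrac{2+3}{20} = \tfrac{1}{4}, \qquad P(A) = \tfrac{2+6}{20} = \tfrac{2}{5},\\
P(A\mid B) &= \tfrac{2}{2+3} = \tfrac{2}{5}, \qquad P(B\mid A) = \tfrac{2}{2+6} = \tfrac{1}{4},\\
P(A\mid B)\,P(B) &= \tfrac{2}{5}\cdot\tfrac{1}{4} = \tfrac{2}{20} = \tfrac{1}{4}\cdot\tfrac{2}{5} = P(B\mid A)\,P(A).
\end{align}</math>

With these weights, ''P''(''A''|''B'')&nbsp;=&nbsp;0.4 while ''P''(''B''|''A'')&nbsp;=&nbsp;0.25, so the two inverse conditional probabilities already differ noticeably, which is the point developed below.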
In general, it cannot be assumed that ''P''(''A''|''B'')&nbsp;≈&nbsp;''P''(''B''|''A''). This can be an insidious error, even for those who are highly conversant with statistics.<ref>Paulos, J.A. (1988) ''Innumeracy: Mathematical Illiteracy and its Consequences'', Hill and Wang. {{ISBN|0-8090-7447-8}} (p. 63 ''et seq.'')</ref> The relationship between ''P''(''A''|''B'') and ''P''(''B''|''A'') is given by [[Bayes' theorem]]:
 
:<math>\begin{align}
P(B\mid A) &= \frac{P(A\mid B) P(B)}{P(A)}\\