Conditional probability

{{Short description|Probability of an event occurring, given that another event has already occurred}}
{{Probability fundamentals}}
In [[probability theory]], '''conditional probability''' is a measure of the [[probability]] of an [[Event (probability theory)|event]] occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred.<ref name="Allan Gut 2013">{{cite book |last=Gut |first=Allan |title=Probability: A Graduate Course |year=2013 |publisher=Springer |___location=New York, NY |isbn=978-1-4614-4707-8 |edition=Second }}</ref> This method relies on event {{mvar|A}} occurring in some relationship with another event {{mvar|B}}; in that case, the event {{mvar|A}} can be analyzed by a conditional probability with respect to {{mvar|B}}. If the event of interest is {{mvar|A}} and the event {{mvar|B}} is known or assumed to have occurred, "the conditional probability of {{mvar|A}} given {{mvar|B}}", or "the probability of {{mvar|A}} under the condition {{mvar|B}}", is usually written as {{math|P(''A''{{!}}''B'')}}<ref name=":0">{{Cite web|title=Conditional Probability|url=https://www.mathsisfun.com/data/probability-events-conditional.html|access-date=2020-09-11|website=www.mathsisfun.com}}</ref> or occasionally {{math|P{{sub|''B''}}(''A'')}}. This can also be understood as the fraction of the probability of {{mvar|B}} that intersects with {{mvar|A}}: <math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=26|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X}}</ref>
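
For a finite sample space with equally likely outcomes, this ratio can be computed directly. The following Python sketch (the fair die and the two events are illustrative choices, not from a cited source) evaluates <math>P(A \mid B)</math> as the fraction of the probability of {{mvar|B}} that lies in {{mvar|A}}:
<syntaxhighlight lang="python">
from fractions import Fraction

# Sample space of a fair six-sided die; all outcomes equally likely.
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # event A: the roll is even
B = {4, 5, 6}   # event B: the roll is at least 4

def prob(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event & omega), len(omega))

# P(A | B) = P(A intersect B) / P(B)
p_a_given_b = prob(A & B) / prob(B)
print(p_a_given_b)  # 2/3
</syntaxhighlight>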
 
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing: the conditional probability that someone who is sick is coughing might be 75%, in which case we would have {{math|P(Cough)}} = 5% and {{math|P(Cough{{!}}Sick)}} = 75%. Although there is a relationship between being sick and coughing in this example, such a relationship or dependence between {{mvar|A}} and {{mvar|B}} is not necessary, nor do the events have to occur simultaneously.
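
These figures can be checked by simulation. The sketch below assumes a hypothetical 4% rate of sickness (a figure chosen here only so that the overall cough rate works out to about 5%; it is not part of the example) and estimates both probabilities by counting:
<syntaxhighlight lang="python">
import random

random.seed(0)

# Hypothetical rates: P(Sick) is an assumed 4%; P(Cough | Sick) = 75%
# as in the example; P(Cough | not Sick) is derived so that the
# overall P(Cough) comes out to about 5%.
P_SICK = 0.04
P_COUGH_GIVEN_SICK = 0.75
P_COUGH_GIVEN_WELL = (0.05 - P_COUGH_GIVEN_SICK * P_SICK) / (1 - P_SICK)

n = 1_000_000
sick = cough = sick_and_cough = 0
for _ in range(n):
    is_sick = random.random() < P_SICK
    p = P_COUGH_GIVEN_SICK if is_sick else P_COUGH_GIVEN_WELL
    is_cough = random.random() < p
    sick += is_sick
    cough += is_cough
    sick_and_cough += is_sick and is_cough

print(cough / n)              # ~0.05: estimate of P(Cough)
print(sick_and_cough / sick)  # ~0.75: estimate of P(Cough | Sick)
</syntaxhighlight>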
 
{{math|P(''A''{{!}}''B'')}} may or may not be equal to {{math|P(''A'')}}, the '''unconditional probability''' or '''absolute probability''' of {{mvar|A}}. If {{math|1=P(''A''{{!}}''B'') = P(''A'')}}, then events {{mvar|A}} and {{mvar|B}} are said to be [[Independence (probability theory)#Two events|''independent'']]: in such a case, knowledge about either event does not alter the probability of the other. {{math|P(''A''{{!}}''B'')}} (the conditional probability of {{mvar|A}} given {{mvar|B}}) typically differs from {{math|P(''B''{{!}}''A'')}}. For example, if a person has [[dengue fever]], the person might have a 90% chance of testing positive for the disease. In this case, what is being measured is that if event {{mvar|B}} (''having dengue'') has occurred, the probability of {{mvar|A}} (''testing positive'') given that {{mvar|B}} occurred is 90%: {{math|P(''A''{{!}}''B'')}} = 90%. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease, due to a high [[false positive]] rate. In this case, the probability of the event {{mvar|B}} (''having dengue'') given that the event {{mvar|A}} (''testing positive'') has occurred is 15%: {{math|P(''B''{{!}}''A'')}} = 15%. Falsely equating the two probabilities can lead to various errors of reasoning, as commonly seen in [[base rate fallacy|base rate fallacies]].
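
To see how such a gap can arise, suppose the disease has a prevalence of 1% and the test a 5% false positive rate among people without the disease (hypothetical figures chosen only to make the arithmetic concrete; the example itself fixes only the 90% and 15% values). Then, by the ratio definition above,
<math display="block">P(B \mid A) = \frac{P(A \cap B)}{P(A)} = \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.05 \times 0.99} = \frac{0.009}{0.0585} \approx 15\%.</math>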
 
While conditional probabilities can provide extremely useful information, limited information is often available in practice. Therefore, it can be useful to reverse or convert a conditional probability using [[Bayes' theorem]]: <math>P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=25–40|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X}}</ref> Another option is to display conditional probabilities in a [[conditional probability table]] to illuminate the relationship between events.
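
As a sketch of such a reversal in code, the snippet below applies Bayes' theorem to the dengue example, using the same hypothetical prevalence and false positive rate as in the worked figures above; only the 90% value comes from the example itself:
<syntaxhighlight lang="python">
# Reversing a conditional probability with Bayes' theorem.
# A = testing positive, B = having dengue.
p_pos_given_dengue = 0.90    # P(A|B), from the example above
p_dengue = 0.01              # P(B): hypothetical prevalence
p_pos_given_healthy = 0.05   # P(A|not B): hypothetical false positive rate

# Law of total probability: P(A) = P(A|B) P(B) + P(A|not B) P(not B)
p_pos = (p_pos_given_dengue * p_dengue
         + p_pos_given_healthy * (1 - p_dengue))

# Bayes' theorem: P(B|A) = P(A|B) P(B) / P(A)
p_dengue_given_pos = p_pos_given_dengue * p_dengue / p_pos
print(round(p_dengue_given_pos, 3))  # 0.154, i.e. about 15%
</syntaxhighlight>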
 
== Definition ==
 
=== Conditioning on an event ===
 
==== [[Andrey Kolmogorov|Kolmogorov]] definition ====
Given two [[event (probability theory)|events]] {{mvar|A}} and {{mvar|B}} from the [[sigma-field]] of a probability space, with the [[marginal probability|unconditional probability]] of {{mvar|B}} being greater than zero (i.e., {{math|P(''B'') > 0}}), the conditional probability of {{mvar|A}} given {{mvar|B}} (<math>P(A \mid B)</math>) is the probability of {{mvar|A}} occurring if {{mvar|B}} has or is assumed to have happened.<ref name=":1">{{Cite book|last=Reichl|first=Linda Elizabeth|title=A Modern Course in Statistical Physics|publisher=WILEY-VCH|year=2016|isbn=978-3-527-69049-7|edition=4th revised and updated|chapter=2.3 Probability}}</ref> Conditioning on {{mvar|B}} amounts to restricting the sample space to the reduced set of outcomes in {{mvar|B}}. The conditional probability can be found as the [[quotient]] of the probability of the intersection of events {{mvar|A}} and {{mvar|B}} (<math>P(A \cap B)</math>, the probability that {{mvar|A}} and {{mvar|B}} occur together, although not necessarily at the same time) and the [[probability]] of {{mvar|B}}:<ref name=":0" /><ref>{{citation|last=Kolmogorov|first=Andrey|title=Foundations of the Theory of Probability|publisher=Chelsea|year=1956 }}</ref><ref>{{Cite web|title=Conditional Probability|url=http://www.stat.yale.edu/Courses/1997-98/101/condprob.htm|access-date=2020-09-11|website=www.stat.yale.edu}}</ref>
<math display="block">P(A \mid B) = \frac{P(A \cap B)}{P(B)}.</math>
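
When all outcomes are equally likely, this quotient reduces to a ratio of counts inside the reduced sample space {{mvar|B}}. A minimal sketch (the die and events are again illustrative choices, matching the earlier example):
<syntaxhighlight lang="python">
# Conditioning as restriction of an equiprobable sample space:
# P(A | B) = |A intersect B| / |B| for a fair six-sided die.
omega = set(range(1, 7))
A = {r for r in omega if r % 2 == 0}  # the roll is even
B = {r for r in omega if r >= 4}      # the roll is at least 4

p_a_given_b = len(A & B) / len(B)
print(p_a_given_b)  # 0.666..., equal to P(A intersect B) / P(B) = (2/6) / (3/6)
</syntaxhighlight>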