{{Short description|Probability of an event occurring, given that another event has already occurred}}
{{Probability fundamentals}}
In [[probability theory]], '''conditional probability''' is a measure of the [[probability]] of an [[Event (probability theory)|event]] occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred.<ref name="Allan Gut 2013">{{cite book |last=Gut |first=Allan |title=Probability: A Graduate Course |year=2013 |publisher=Springer |___location=New York, NY |isbn=978-1-4614-4707-8 |edition=Second }}</ref> If the event of interest is {{mvar|A}} and the event {{mvar|B}} is known or assumed to have occurred, "the conditional probability of {{mvar|A}} given {{mvar|B}}", or "the probability of {{mvar|A}} under the condition {{mvar|B}}", is usually written as {{math|P(''A''{{!}}''B'')}}<ref name=":0">{{Cite web|title=Conditional Probability|url=https://www.mathsisfun.com/data/probability-events-conditional.html|access-date=2020-09-11|website=www.mathsisfun.com}}</ref> or occasionally {{math|P{{sub|''B''}}(''A'')}}. It can be understood as the fraction of the probability of {{mvar|B}} that also lies in {{mvar|A}}: <math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=26|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X}}</ref>
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. The conditional probability that someone who is sick is coughing might be 75%, in which case we would have {{math|P(Cough)}} = 5% and {{math|P(Cough{{!}}Sick)}} = 75%. Although there is a relationship between {{mvar|A}} and {{mvar|B}} in this example, such a relationship or dependence between {{mvar|A}} and {{mvar|B}} is not necessary, nor do the events have to occur simultaneously.
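The ratio definition above can be checked numerically against this example. The sketch below uses the article's figures {{math|P(Cough)}} = 5% and {{math|P(Cough{{!}}Sick)}} = 75%, together with an ''assumed'' value {{math|P(Sick)}} = 4% (not stated in the text), which fixes the joint probability at 3%:

```python
from fractions import Fraction

# Assumed illustrative numbers, consistent with the text's example:
# P(Cough) = 5% and P(Cough | Sick) = 75%.  P(Sick) = 4% is an extra
# assumption, which fixes the joint probability P(Cough and Sick) = 3%.
p_sick = Fraction(4, 100)
p_cough_and_sick = Fraction(3, 100)

# Ratio definition of conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_cough_given_sick = p_cough_and_sick / p_sick
print(p_cough_given_sick)  # 3/4
```

Exact rational arithmetic (`Fraction`) is used so the result is the clean ratio 3/4 rather than a rounded float.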
{{math|P(''A''{{!}}''B'')}} may or may not be equal to {{math|P(''A'')}}, the unconditional probability (or ''absolute probability'') of {{mvar|A}}.
While conditional probabilities can provide extremely useful information, the information available is often limited. It can therefore be useful to reverse or convert a conditional probability using [[Bayes' theorem]]: <math>P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=25–40|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X}}</ref>
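A minimal sketch of this reversal, continuing the cough/sickness example; the function name and the value {{math|P(Sick)}} = 0.04 are illustrative assumptions, not from the article:

```python
def reverse_conditional(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# From P(Cough|Sick) = 0.75, an assumed P(Sick) = 0.04, and P(Cough) = 0.05,
# recover the reversed conditional probability P(Sick|Cough).
p_sick_given_cough = reverse_conditional(0.75, 0.04, 0.05)
print(round(p_sick_given_cough, 2))  # 0.6
```

Note that the three inputs cannot be chosen freely: they must satisfy {{math|P(''B''{{!}}''A'')P(''A'') ≤ P(''B'')}}, or the "probability" returned would exceed 1.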
== Definition ==
=== Conditioning on an event ===
==== [[Andrey Kolmogorov|Kolmogorov]] definition ====
Given two [[event (probability theory)|events]] {{mvar|A}} and {{mvar|B}} from the [[sigma-field]] of a probability space, with the [[marginal probability|unconditional probability]] of {{mvar|B}} being greater than zero (i.e., {{math|P(''B'') > 0}}), the conditional probability of {{mvar|A}} given {{mvar|B}} (<math>P(A \mid B)</math>) is the probability of ''A'' occurring if ''B'' has or is assumed to have happened.<ref name=":1">{{Cite book|last=Reichl|first=Linda Elizabeth|title=A Modern Course in Statistical Physics|publisher=WILEY-VCH|year=2016|isbn=978-3-527-69049-7|edition=4th revised and updated|chapter=2.3 Probability}}</ref> Conditioning on {{mvar|B}} can be viewed as restricting the sample space to those outcomes in which {{mvar|B}} occurs. The conditional probability is the [[quotient]] of the probability of the joint intersection of events {{mvar|A}} and {{mvar|B}} (<math>P(A \cap B)</math>)—the probability that ''A'' and ''B'' both occur, although not necessarily at the same time—and the [[probability]] of {{mvar|B}}:<ref name=":0" /><ref>{{citation|last=Kolmogorov|first=Andrey|title=Foundations of the Theory of Probability|publisher=Chelsea|year=1956 }}</ref><ref>{{Cite web|title=Conditional Probability|url=http://www.stat.yale.edu/Courses/1997-98/101/condprob.htm|access-date=2020-09-11|website=www.stat.yale.edu}}</ref>
:<math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}</math>
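For a finite sample space with equally likely outcomes, this definition reduces to counting: {{math|P(''A''{{!}}''B'')}} is the fraction of outcomes in {{mvar|B}} that also lie in {{mvar|A}}. A short sketch with two fair dice (the choice of events is illustrative, not from the article):

```python
from fractions import Fraction
from itertools import product

# Finite sample space: ordered outcomes (d1, d2) of two fair dice.
omega = list(product(range(1, 7), repeat=2))

A = {(d1, d2) for d1, d2 in omega if d1 + d2 == 8}  # event: the sum is 8
B = {(d1, d2) for d1, d2 in omega if d1 == 3}       # event: first die shows 3

# Kolmogorov definition P(A|B) = P(A ∩ B) / P(B); with equally likely
# outcomes the factor 1/|Ω| cancels, leaving |A ∩ B| / |B|.
p_a_given_b = Fraction(len(A & B), len(B))
print(p_a_given_b)  # 1/6
```

Only the single outcome (3, 5) lies in both events, while {{mvar|B}} contains six outcomes, giving 1/6.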