In [[probability theory]], '''conditional probability''' is a measure of the [[probability]] of an [[Event (probability theory)|event]] occurring, given that another event (by assumption, presumption, assertion or evidence) is already known to have occurred.<ref name="Allan Gut 2013">{{cite book |last=Gut |first=Allan |title=Probability: A Graduate Course |year=2013 |publisher=Springer |___location=New York, NY |isbn=978-1-4614-4707-8 |edition=Second }}</ref> This concept applies when the event {{mvar|A}} stands in some relationship to another event {{mvar|B}}; the probability of {{mvar|A}} can then be analyzed conditionally on {{mvar|B}}. If the event of interest is {{mvar|A}} and the event {{mvar|B}} is known or assumed to have occurred, "the conditional probability of {{mvar|A}} given {{mvar|B}}", or "the probability of {{mvar|A}} under the condition {{mvar|B}}", is usually written as {{math|P(''A''{{!}}''B'')}}<ref name=":0">{{Cite web|title=Conditional Probability|url=https://www.mathsisfun.com/data/probability-events-conditional.html|access-date=2020-09-11|website=www.mathsisfun.com}}</ref> or occasionally {{math|P{{sub|''B''}}(''A'')}}. This can also be understood as the fraction of the probability of {{mvar|B}} that intersects with {{mvar|A}}, or equivalently as the ratio of the probability that both events occur to the probability that the "given" event occurs: <math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}</math>.<ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|pages=26|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1 |issn=1431-875X}}</ref>
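For example, suppose a fair six-sided die is rolled, {{mvar|A}} is the event that the result is even, and {{mvar|B}} is the event that the result is greater than 3. Then {{math|1=P(''B'') = 3/6}} and {{math|1=P(''A'' ∩ ''B'') = 2/6}} (the outcomes 4 and 6), so
<math display="block">P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{2/6}{3/6} = \frac{2}{3}.</math>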
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. The conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have {{math|1=P(Cough) = 5%}} and {{math|1=P(Cough{{!}}Sick) = 75%}}.
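If, purely for illustration, the probability of being sick on a given day is taken to be {{math|1=P(Sick) = 4%}} (an assumed figure, not given above), the definition yields the probability of being both sick and coughing:
<math display="block">P(\text{Cough} \cap \text{Sick}) = P(\text{Cough} \mid \text{Sick}) \, P(\text{Sick}) = 0.75 \times 0.04 = 0.03.</math>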
{{math|P(''A''{{!}}''B'')}} may or may not be equal to {{math|P(''A'')}}, i.e., the '''unconditional probability''' or '''absolute probability''' of {{mvar|A}}. If {{math|1=P(''A''{{!}}''B'') = P(''A'')}}, then events {{mvar|A}} and {{mvar|B}} are said to be [[Independence (probability theory)#Two events|''independent'']]: in such a case, knowledge of either event does not alter the likelihood of the other. {{math|P(''A''{{!}}''B'')}} (the conditional probability of {{mvar|A}} given {{mvar|B}}) typically differs from {{math|P(''B''{{!}}''A'')}}. For example, if a person has [[dengue fever]], the person might have a 90% chance of testing positive for the disease. In this case, what is being measured is that if event {{mvar|B}} (''having dengue'') has occurred, the probability of {{mvar|A}} (''testing positive'') given that {{mvar|B}} occurred is 90%, written {{math|1=P(''A''{{!}}''B'') = 90%}}. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease, due to a high [[false positive]] rate. In this case, the probability of the event {{mvar|B}} (''having dengue'') given that the event {{mvar|A}} (''testing positive'') has occurred is 15%, written {{math|1=P(''B''{{!}}''A'') = 15%}}. Falsely equating the two probabilities leads to various errors of reasoning, commonly seen through [[base rate fallacy|base rate fallacies]].
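The two quantities are linked by [[Bayes' theorem]]. With illustrative figures chosen only to show the effect (a 1% prevalence of the disease and a 5% false-positive rate, both assumed here rather than taken from the example above), a 90% detection rate gives
<math display="block">P(B \mid A) = \frac{P(A \mid B)\,P(B)}{P(A \mid B)\,P(B) + P(A \mid \neg B)\,P(\neg B)} = \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.05 \times 0.99} \approx 0.15,</math>
so most positive results are false positives even though the test rarely misses the disease.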