Citation bot (talk | contribs): Added article-number. Removed URL that duplicated identifier. Removed parameters. Some additions/deletions were parameter name changes.
(4 intermediate revisions by the same user not shown)
Line 110:
where <math> b_i n \in \mathbb{N}</math><ref name=Draheim2017b />
[[Radical probabilism|Jeffrey conditionalization]]<ref>{{citation|first=Richard C.|last=Jeffrey|title=The Logic of Decision}}</ref>
is a special case of partial conditional probability, in which the condition events must form a [[Partition of a set|partition]]:
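The displayed formula is elided by this hunk. Purely as an illustration, and assuming the usual reading in which the partition events ''B''<sub>''i''</sub> receive new weights ''b''<sub>''i''</sub> summing to one, the following Python sketch (all numbers hypothetical) computes the Jeffrey-updated probability of an event ''A''.

<syntaxhighlight lang="python">
# Illustrative sketch of Jeffrey conditionalization (hypothetical numbers).
# A prior P is revised on a partition {B_1, B_2, B_3} whose new weights b_i
# are given; the updated probability of A is sum_i b_i * P(A | B_i).
p_A_given_B = [0.9, 0.5, 0.1]   # assumed conditional probabilities P(A | B_i)
b = [0.2, 0.3, 0.5]             # assumed new weights of the partition events

assert abs(sum(b) - 1.0) < 1e-12                 # the B_i must form a partition
p_new_A = sum(bi * pa for bi, pa in zip(b, p_A_given_B))
print(round(p_new_A, 2))                         # 0.9*0.2 + 0.5*0.3 + 0.1*0.5 = 0.38
</syntaxhighlight>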
Line 281:
:<math>P(B\mid A) = P(B)</math>
is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined, and the preferred definition is symmetrical in ''A'' and ''B''. Independence of two events is not the same as their being disjoint.<ref>{{Cite book|last=Tijms|first=Henk|url=https://www.cambridge.org/core/books/understanding-probability/B82E701FAAD2C0C2CF36E05CFC0FF3F2|title=Understanding Probability|date=2012|publisher=Cambridge University Press|isbn=978-1-107-65856-1}}</ref>
Given the independent event pair [''A'',''B''] and an event ''C'', the pair is defined to be [[Conditional independence|conditionally independent]] if<ref>{{Cite book|last=Pfeiffer|first=Paul E.|title=Conditional Independence in Applied Probability|date=1978|publisher=Birkhäuser Boston|isbn=978-1-4612-6335-7|___location=Boston, MA|oclc=858880328}}</ref>
:<math>P(A \cap B \mid C) = P(A\mid C)\,P(B\mid C)</math>
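A purely illustrative check of both definitions on a small uniform sample space (three fair coin tosses; the events ''A'', ''B'', ''C'' below are chosen only for the example):

<syntaxhighlight lang="python">
# Check P(A n B) = P(A)P(B) and P(A n B | C) = P(A | C)P(B | C)
# on a uniform sample space of three fair coin tosses.
from fractions import Fraction
from itertools import product

omega = list(product("HT", repeat=3))
def prob(event):                              # uniform measure: |event| / |omega|
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == "H"}         # first toss heads
B = {w for w in omega if w[1] == "H"}         # second toss heads
C = {w for w in omega if w[2] == "H"}         # conditioning event: third toss heads

print(prob(A & B) == prob(A) * prob(B))       # True: A and B are independent

p_C = prob(C)
lhs = prob(A & B & C) / p_C                   # P(A n B | C)
rhs = (prob(A & C) / p_C) * (prob(B & C) / p_C)
print(lhs == rhs)                             # True: conditionally independent given C
</syntaxhighlight>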
Line 318:
{{Main|Confusion of the inverse}}
[[File:Bayes theorem visualisation.svg|thumb|450x450px|A geometric visualization of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that <math>P(A\mid B) P(B) = P(B\mid A) P(A)</math>, i.e. <math>P(A\mid B) = \frac{P(B\mid A)\, P(A)}{P(B)}</math>. Similar reasoning can be used to show that <math>P(\bar A\mid B) = \frac{P(B\mid\bar A) P(\bar A)}{P(B)}</math> etc.]]
In general, it cannot be assumed that ''P''(''A''|''B'') ≈ ''P''(''B''|''A''). This can be an insidious error, even for those who are highly conversant with statistics.<ref>{{cite book |last=Paulos |first=John Allen |title=Innumeracy: Mathematical Illiteracy and Its Consequences}}</ref>
:<math>P(B\mid A) = \frac{P(A\mid B)\, P(B)}{P(A)}</math>
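As an illustration only, the rearranged formula can be evaluated with entirely hypothetical base-rate and test-accuracy values (not taken from the article's sources), showing how far ''P''(''B''|''A'') can lie from ''P''(''A''|''B''):

<syntaxhighlight lang="python">
# Sketch: P(B | A) vs P(A | B) with hypothetical numbers.
# B = "has the condition", A = "test is positive".
p_B = 0.01             # assumed prevalence P(B)
p_A_given_B = 0.95     # assumed sensitivity P(A | B)
p_A_given_notB = 0.05  # assumed false-positive rate P(A | not B)

# Law of total probability for the denominator P(A).
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Bayes' theorem: P(B | A) = P(A | B) P(B) / P(A).
p_B_given_A = p_A_given_B * p_B / p_A
print(round(p_B_given_A, 3))   # 0.161, far below P(A | B) = 0.95
</syntaxhighlight>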
Line 332:
where the events <math>(B_n)</math> form a countable [[Partition of a set|partition]] of <math>\Omega</math>.
This fallacy may arise through [[selection bias]].
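The formula that the "where" clause above qualifies is elided by the hunk; assuming it is the usual expansion of ''P''(''A'') over the partition (the denominator of the extended form of Bayes' theorem), a minimal sketch with hypothetical values is:

<syntaxhighlight lang="python">
# Illustrative law of total probability over a (here finite) partition:
#   P(A) = sum_n P(A | B_n) * P(B_n).  All values are hypothetical.
p_Bn = [0.5, 0.3, 0.2]             # assumed P(B_n); must sum to 1
p_A_given_Bn = [0.10, 0.40, 0.80]  # assumed P(A | B_n)

assert abs(sum(p_Bn) - 1.0) < 1e-12
p_A = sum(pa * pb for pa, pb in zip(p_A_given_Bn, p_Bn))
print(round(p_A, 2))               # 0.05 + 0.12 + 0.16 = 0.33
</syntaxhighlight>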
=== Over- or under-weighting priors ===
Line 340:
Formally, ''P''(''A'' | ''B'') is defined as the probability of ''A'' according to a new probability function on the sample space that assigns probability 0 to outcomes not in ''B'' and is consistent with all original [[probability measure]]s.<ref>George Casella and Roger L. Berger (1990), ''Statistical Inference'', Duxbury Press, {{ISBN|0-534-11958-1}} (p. 18 ''et seq.'')</ref><ref name="grinstead">[http://math.dartmouth.edu/~prob/prob/prob.pdf Grinstead and Snell's Introduction to Probability], p. 134</ref>
Let Ω be a discrete [[sample space]] with [[elementary event]]s {''ω''}, and let ''P'' be the probability measure with respect to the [[σ-algebra]] of Ω. Suppose we are told that the event ''B'' ⊆ Ω has occurred. A new [[probability distribution]] (denoted by the conditional notation) is to be assigned on {''ω''} to reflect this. All events that are not in ''B'' will have null probability in the new distribution. For events in ''B'', two conditions must be met: the probability of ''B'' is one and the relative magnitudes of the probabilities must be preserved. The former is required by the [[Probability axioms|axioms of probability]], and the latter stems from the fact that the new probability measure has to be the analog of ''P'' in which the probability of ''B'' is one. Hence:
#<math>\omega \in B : P(\omega\mid B) = \alpha P(\omega)</math>
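A minimal sketch of this construction, assuming a small finite sample space with hypothetical weights: outcomes outside ''B'' receive probability 0, and outcomes in ''B'' are rescaled by the common factor α = 1/''P''(''B'') so that the new measure sums to one.

<syntaxhighlight lang="python">
# Sketch: conditioning a discrete distribution on an event B.
# Outcomes outside B get probability 0; outcomes in B keep their relative
# magnitudes, rescaled by alpha = 1 / P(B). The weights are hypothetical.
from fractions import Fraction

P = {"w1": Fraction(1, 2), "w2": Fraction(1, 4),
     "w3": Fraction(1, 8), "w4": Fraction(1, 8)}
B = {"w2", "w3"}                                   # the observed event

p_B = sum(P[w] for w in B)                         # P(B) = 3/8
alpha = 1 / p_B                                    # scale factor 8/3

P_given_B = {w: (alpha * P[w] if w in B else Fraction(0)) for w in P}
print(P_given_B["w2"], P_given_B["w3"])            # 2/3 1/3
print(sum(P_given_B.values()))                     # 1, as the axioms require
</syntaxhighlight>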