| isbn = 0-444-11037-2 }}
</ref> Also dating from the latter half of the [[19th century]], the [[Chebyshev_inequality|inequality]] attributed to [[Chebyshev]] described bounds on a distribution when only the mean and variance of the variable are known, and the related [[Markov_inequality|inequality]] attributed to [[Andrey Markov|Markov]] found bounds on a positive variable when only the mean is known.
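These two classical inequalities can be sketched directly in Python; the function names below are illustrative, not from any particular library, and the bounds hold for any distribution with the stated moments:

```python
def markov_upper_bound(mean, a):
    """Markov's inequality: for a nonnegative random variable X
    with the given mean, P(X >= a) <= mean / a."""
    return min(mean / a, 1.0)

def chebyshev_upper_bound(variance, t):
    """Chebyshev's inequality: for any random variable X with the
    given variance, P(|X - E[X]| >= t) <= variance / t**2."""
    return min(variance / t ** 2, 1.0)

# A nonnegative variable with mean 2 exceeds 10 with probability
# at most 0.2; a variable with variance 1 strays 4 or more from
# its mean with probability at most 1/16 = 0.0625, regardless of
# the distribution's shape.
print(markov_upper_bound(2.0, 10.0))    # 0.2
print(chebyshev_upper_bound(1.0, 4.0))  # 0.0625
```

Both bounds are distribution-free: only the mean (Markov) or the variance (Chebyshev) is needed, which is what makes them precursors of probability bounds analysis.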
[[Henry E. Kyburg, Jr.|Kyburg]] reviewed the history of interval probabilities and traced the development of the critical ideas through the [[20th century]], including the important notion of incomparable probabilities favored by [[Keynes]].
Of particular note is [[Maurice René Fréchet|Fréchet]]'s derivation in the [[1930s]] of bounds on calculations involving total probabilities without dependence assumptions.
The methods of probability bounds analysis that could be routinely used in risk assessments were developed in the [[1980s]]. Hailperin<ref name=Hailperin86 /> described a computational scheme for bounding logical calculations extending the ideas of Boole. Yager<ref name=Yager>Yager, R.R. (1986). Arithmetic and other operations on Dempster–Shafer structures. ''International Journal of Man-machine Studies'' '''25''': 357–366.</ref> described the elementary procedures by which bounds on [[convolution of probability distributions|convolutions]] can be computed under an assumption of independence. At about the same time, Makarov<ref name=Makarov>Makarov, G.D. (1981). Estimates for the distribution function of a sum of two random variables when the marginal distributions are fixed. ''Theory of Probability and Its Applications'' '''26''': 803–806.</ref>, and independently, Rüschendorf<ref>Rüschendorf, L. (1982). Random variables with maximum sums. ''Advances in Applied Probability'' '''14''': 623–632.</ref> solved the problem, originally posed by [[Kolmogorov]], of how to find the upper and lower bounds for the probability distribution of a sum of random variables whose marginal distributions, but not their joint distribution, are known. Frank et al.<ref name=Franketal87>Frank, M.J., R.B. Nelsen and B. Schweizer (1987). Best-possible bounds for the distribution of a sum—a problem of Kolmogorov. ''Probability Theory and Related Fields'' '''74''': 199–211.</ref> generalized the result of Makarov and expressed it in terms of [[Copula (probability theory)|copulas]].
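Yager's independence construction can be sketched in a few lines: each Dempster–Shafer structure is represented here as a list of (interval, mass) pairs, and the sum takes every pairwise interval sum with the product of the masses. This is a simplified illustration; practical implementations also condense or re-discretize the resulting focal elements:

```python
def ds_sum_independent(a, b):
    """Sum of two Dempster-Shafer structures under independence:
    Cartesian product of focal elements, with intervals added
    endpoint-wise and masses multiplied."""
    return [((alo + blo, ahi + bhi), ma * mb)
            for (alo, ahi), ma in a
            for (blo, bhi), mb in b]

# Hypothetical two-focal-element structure plus a one-element structure.
x = [((0.0, 1.0), 0.5), ((1.0, 2.0), 0.5)]
y = [((2.0, 3.0), 1.0)]
print(ds_sum_independent(x, y))
# [((2.0, 4.0), 0.5), ((3.0, 5.0), 0.5)]
```

Because the masses of the input structures each sum to one, the product masses of the output do as well, so the result is again a Dempster–Shafer structure.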
It is possible to mix very different kinds of knowledge together in a bounding analysis. For instance,
:''Z'' ~ <big>[ sup</big><sub>''x''+''y''=''z''</sub> max(''F''(''x'') + ''G''(''y'') − 1, 0), <big>inf</big><sub>''x''+''y''=''z''</sub> min(''F''(''x'') + ''G''(''y''), 1) <big>]</big>.
These bounds are implied by the Fréchet–Hoeffding [[copula]] bounds.
The convolution under the intermediate assumption that ''X'' and ''Y'' have [[positive quadrant dependence|positive dependence]] is likewise easy to compute, as is the convolution under the extreme assumptions of [[Comonotonicity|perfect positive]] or [[countermonotonicity|perfect negative]] dependency between ''X'' and ''Y''.<ref name=Fersonetal04 />
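The sup/inf formula above can be approximated numerically by searching a grid of ''x'' values with ''y'' = ''z'' − ''x''. The sketch below is illustrative, assuming both summands are uniform on [0, 1]; the grid resolution and the uniform marginals are choices made here, not part of the formula:

```python
def unif01_cdf(x):
    """CDF of the uniform distribution on [0, 1]."""
    return max(0.0, min(1.0, x))

def sum_cdf_bounds(F, G, z, grid):
    """Best-possible bounds on P(X + Y <= z) when only the marginal
    CDFs F and G are known, approximated over a grid of x values:
      lower = sup over x of max(F(x) + G(z - x) - 1, 0)
      upper = inf over x of min(F(x) + G(z - x), 1)"""
    lower = max(max(F(x) + G(z - x) - 1.0, 0.0) for x in grid)
    upper = min(min(F(x) + G(z - x), 1.0) for x in grid)
    return lower, upper

grid = [i / 100.0 for i in range(-100, 201)]  # x from -1.0 to 2.0

# With X, Y ~ uniform(0,1) and no dependence assumption, P(X + Y <= 1)
# is only known to lie in (approximately) the vacuous interval [0, 1];
# at z = 2 the sum is certainly <= 2, so both bounds equal 1.
print(sum_cdf_bounds(unif01_cdf, unif01_cdf, 1.0, grid))
print(sum_cdf_bounds(unif01_cdf, unif01_cdf, 2.0, grid))
```

The wide interval at ''z'' = 1 illustrates why the dependence assumption matters: under independence the convolution gives a single value, 0.5, but with no assumption at all the probability can be anywhere between 0 and 1.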