{{Use dmy dates|date=April 2013}}
'''Probability bounds analysis (PBA)''' is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called [[probability box]]es, and constrain [[cumulative distribution function|cumulative probability distributions]] (rather than [[probability density function|densities]] or [[probability mass function|mass functions]]).
 
This [[upper and lower bounds|bounding]] approach permits analysts to make calculations without requiring overly precise assumptions about parameter values, dependence among variables, or even distribution shape. Probability bounds analysis is essentially a combination of the methods of standard [[interval analysis]] and classical [[probability theory]]. It gives the same answer as interval analysis does when only range information is available, and the same answer as [[Monte Carlo simulation]] does when information is abundant enough to precisely specify input distributions and their dependencies. Thus, it is a generalization of both interval analysis and probability theory.
 
The diverse methods comprising probability bounds analysis provide algorithms to evaluate mathematical expressions when there is uncertainty about the input values, their dependencies, or even the form of mathematical expression itself. The calculations yield results that are guaranteed to enclose all possible distributions of the output variable if the input [[probability box|p-boxes]] were also sure to enclose their respective distributions. In some cases, a calculated p-box will also be best-possible in the sense that
the bounds could be no tighter without excluding some of the possible
distributions.
 
P-boxes are usually merely bounds on possible distributions. The bounds often also enclose distributions that are not themselves possible. For instance, the set of probability distributions that could result from adding random values without the independence assumption from two (precise) distributions is generally a proper [[subset]] of all the distributions enclosed by the p-box computed for the sum. That is, there are distributions within the output p-box that could not arise under any dependence between the two input distributions. The output p-box will, however, always contain all distributions that are possible, so long as the input p-boxes were sure to enclose their respective underlying distributions. This property often suffices for use in [[Probabilistic risk assessment|risk analysis]] and other fields requiring calculations under uncertainty.
 
==History of bounding probability==
| ___location = Amsterdam
| isbn = 0-444-11037-2 }}
</ref> Also dating from the latter half of the [[19th century]], the [[Chebyshev_inequality|inequality]] attributed to [[Chebyshev]] described bounds on a distribution when only the mean and
variance of the variable are known, and the related [[Markov_inequality|inequality]] attributed to [[Andrey Markov|Markov]] found bounds on a
positive variable when only the mean is known.
[[Henry E. Kyburg, Jr.|Kyburg]]<ref name="kyburg99">Kyburg, H.E., Jr. (1999). [http://www.sipta.org/documentation/interval_prob/kyburg.pdf Interval valued probabilities]. SIPTA Documentation on Imprecise Probability.</ref> reviewed the history
of interval probabilities and traced the development of the critical ideas through the [[20th century]], including the important notion of incomparable probabilities favored by [[John Maynard Keynes|Keynes]].
Of particular note is [[Maurice René Fréchet|Fréchet]]'s derivation in the [[1930s]] of bounds on calculations involving total probabilities without
dependence assumptions. Bounding probabilities has continued to the
present day (e.g., Walley's theory of [[imprecise probability]]<ref name="WALLEY1991">{{cite book
 
The methods of probability bounds analysis that could be routinely used in
risk assessments were developed in the [[1980s]]. Hailperin<ref name=Hailperin86 /> described a computational scheme for bounding logical calculations extending the ideas of Boole. Yager<ref name=Yager>Yager, R.R. (1986). Arithmetic and other operations on Dempster–Shafer structures. ''International Journal of Man-machine Studies'' '''25''': 357–366.</ref> described the elementary procedures by which bounds on [[convolution of probability distributions|convolutions]] can be computed under an assumption of independence. At about the same time, Makarov<ref name=Makarov>Makarov, G.D. (1981). Estimates for the distribution function of a sum of two random variables when the marginal distributions are fixed. ''Theory of Probability and Its Applications'' '''26''': 803–806.</ref> and, independently, Rüschendorf<ref>Rüschendorf, L. (1982). Random variables with maximum sums. ''Advances in Applied Probability'' '''14''': 623–632.</ref> solved the problem, originally posed by [[Kolmogorov]], of how to find the upper and lower bounds for the probability distribution of a sum of random variables whose marginal distributions, but not their joint distribution, are known. Frank et al.<ref name=Franketal87>Frank, M.J., R.B. Nelsen and B. Schweizer (1987). Best-possible bounds for the distribution of a sum&mdash;a problem of Kolmogorov. ''Probability Theory and Related Fields'' '''74''': 199–211.</ref> generalized the result of Makarov and expressed it in terms of [[Copula (probability theory)|copulas]]. Since that time, formulas and algorithms for sums have been generalized and extended to differences, products, quotients and other binary and unary functions under various dependence assumptions.<ref name=WilliamsonDowns>Williamson, R.C., and T. Downs (1990). Probabilistic arithmetic I: Numerical methods for calculating convolutions and dependency bounds. ''International Journal of Approximate Reasoning'' '''4''': 89–158.</ref><ref name=Fersonetal03>Ferson, S., V. 
Kreinovich, L. Ginzburg, D.S. Myers, and K. Sentz. (2003). [http://www.ramas.com/unabridged.zip ''Constructing Probability Boxes and Dempster–Shafer Structures'']. SAND2002-4015. Sandia National Laboratories, Albuquerque, NM.</ref><ref>Berleant, D. (1993). Automatically verified reasoning with both intervals and probability density functions. ''Interval Computations'' '''1993 (2) ''': 48–70.</ref><ref>Berleant, D., G. Anderson, and C. Goodman-Strauss (2008). Arithmetic on bounded families of distributions: a DEnv algorithm tutorial. Pages 183–210 in ''Knowledge Processing with Interval and Soft Computing'', edited by C. Hu, R.B. Kearfott, A. de Korvin and V. Kreinovich, Springer (ISBN 978-1-84800-325-5).</ref><ref name=BerleantGoodmanStrauss>Berleant, D., and C. Goodman-Strauss (1998). Bounding the results of arithmetic operations on random variables of unknown dependency using intervals. ''Reliable Computing'' '''4''': 147–165.</ref><ref name=Fersonetal04>Ferson, S., R. Nelsen, J. Hajagos, D. Berleant, J. Zhang, W.T. Tucker, L. Ginzburg and W.L. Oberkampf (2004). [http://www.ramas.com/depend.pdf ''Dependence in Probabilistic Modeling, Dempster–Shafer Theory, and Probability Bounds Analysis'']. Sandia National Laboratories, SAND2004-3072, Albuquerque, NM.</ref> <!--
 
It is possible to mix very different kinds of knowledge together in a bounding analysis. For instance,
 
In some cases, we may not know whether a quantity varies or is a fixed constant. Even if we know a quantity to be a constant, we may not know its value precisely. And, even if we know a quantity to be randomly varying, we may not know the statistical distribution that governs that variation, or the stochastic dependence it may have with other quantities.
 
In some cases, the shape or family of the distribution of a quantity may be known from mechanistic or physics-based arguments, but its parameters may be in doubt. In other cases, some summary statistical characteristics of a quantity may have been recorded in the scientific literature, but other details and the original data are unavailable, so that we do not know the family of the statistical distribution even though we know some of its parameters. In some cases, there may be sample data available, but the sample size may be small, or the data values may have non-negligible measurement uncertainty.
 
Further suppose that sparse data were used to form the 95% confidence limits for the distribution of ''C''. And the variable ''D'' is known to be well described by a precise distribution.
 
Probability bounds analysis includes the important special case of [[dependency bounds analysis]]<ref name=WilliamsonDowns /> to compute bounds on the cumulative distribution of a function of random variables when only the marginal distributions of the variables are known, which is a problem originally posed by [[Kolmogorov]].
 
==Arithmetic expressions==
Arithmetic expressions involving operations such as addition, subtraction, multiplication, division, minima, maxima, powers, exponentials, logarithms, square roots, and absolute values are commonly used in [[Probabilistic risk assessment|risk analyses]] and uncertainty modeling. Convolution is the operation of finding the probability distribution of a sum of independent random variables specified by probability distributions. The term can be extended to finding distributions of other mathematical functions (products, differences, quotients, and more complex functions) under other assumptions about the intervariable dependencies. There are convenient algorithms for computing these generalized convolutions under a variety of assumptions about the dependencies among the inputs.<ref name=Yager /><ref name=WilliamsonDowns /><ref name=Fersonetal03 /><ref name=Fersonetal04 />
 
===Mathematical details===
Let {{Unicode|&#x1D53B;}} denote the space of distribution functions on the [[real number]]s {{Unicode|ℝ}}, i.e., {{Unicode|&#x1D53B;}} = {''D'' | ''D'' : {{Unicode|ℝ}} → [0,1], ''D''(''x'') ≤ ''D''(''y'') whenever ''x'' < ''y'', for all ''x'', ''y'' [[Naive_set_theory#Sets.2C_membership_and_equality|&isin;]] {{Unicode|ℝ}}}, and let {{Unicode|&#x1D540;}} denote the set of real [[Interval (mathematics)|intervals]], i.e., {{Unicode|&#x1D540;}} = {''i'' | ''i'' = [''i''<sub>1</sub>, ''i''<sub>2</sub>], ''i''<sub>1</sub> ≤ ''i''<sub>2</sub>, ''i''<sub>1</sub>, ''i''<sub>2</sub> ∈ {{Unicode|ℝ}}}. Then a p-box is a quintuple {''{{overbar|F}}'', <u>''F''</u>, ''m'', ''v'', '''F'''}, where ''{{overbar|F}}'', <u>''F''</u> ∈ {{Unicode|&#x1D53B;}}, while ''m'', ''v'' ∈ {{Unicode|&#x1D540;}}, and '''F''' ⊆ {{Unicode|&#x1D53B;}}. This quintuple denotes the set of distribution functions ''F'' ∈ {{Unicode|&#x1D53B;}} matching the following constraints: <!--are the formulas only good for non-negative random variables?-->
 
If ''F'' is a [[distribution function]] and ''B'' is a [[p-box]], the notation ''F'' ∈ ''B'' means that ''F'' is an
[[Expected_value|E]](''F'') &isin; [''m''<sub>1</sub>,''m''<sub>2</sub>],
[[Variance|V]](''F'') &isin; [''v''<sub>1</sub>,''v''<sub>2</sub>], and
''F'' &isin; '''F'''. We sometimes say ''F'' is <em>inside</em> ''B''.
In some cases, there may be no information about the moments or distribution family other than what is
encoded in the two distribution functions that constitute the edges of the p-box. Then the quintuple
representing the p-box {''B''<sub>1</sub>, ''B''<sub>2</sub>, [&minus;&infin;,&infin;], [0,&infin;], 𝔻}
can be denoted more compactly as [''B''<sub>1</sub>, ''B''<sub>2</sub>]. This notation harkens back to
that of intervals on the real line, except that the endpoints are distributions rather than points.
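As a concrete illustration of this notation, a p-box [''B''<sub>1</sub>, ''B''<sub>2</sub>] can be represented in software by its two bounding distribution functions, and a candidate distribution can be checked for membership pointwise. The sketch below uses ad hoc uniform CDFs as hypothetical bounds; it is not drawn from any particular library.

```python
def uniform_cdf(lo, hi):
    """CDF of a uniform distribution on [lo, hi]."""
    def F(x):
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return F

def inside(F, B1, B2, xs):
    """True if B2(x) <= F(x) <= B1(x) on the grid xs,
    i.e., F lies inside the p-box [B1, B2]."""
    return all(B2(x) <= F(x) <= B1(x) for x in xs)

B1 = uniform_cdf(0.0, 1.0)   # left (upper) bounding CDF
B2 = uniform_cdf(0.5, 1.5)   # right (lower) bounding CDF
F  = uniform_cdf(0.2, 1.2)   # a candidate distribution
xs = [i / 10 for i in range(-10, 26)]
print(inside(F, B1, B2, xs))   # → True
```

A distribution shifted outside the band, such as a uniform on [&minus;0.5, 0.5], would fail the same check.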
 
distribution function ''F'', that is, ''F'' : {{Unicode|ℝ}} &rarr; [0,1] : ''x'' ↦ Pr(''X'' &le; ''x'').
Let us generalize the tilde notation for use with p-boxes. We will write
''X'' ~ ''B''
to mean that ''X'' is a random variable whose distribution function is unknown except that it is inside ''B''.
If ''X'' and ''Y'' are independent random variables with distributions ''F'' and ''G''
respectively, then ''X'' + ''Y'' = ''Z'' ~ ''H'' given by
:''H''(''z'') = <big>&int;&int;</big><sub>''x''+''y''&le;''z''</sub> d''F''(''x'') d''G''(''y'') = <big>&int; </big>{{su|p=&infin;|b=−&infin;}} ''F''(''z'' &minus; ''y'') d''G''(''y'') = ''F * G''.
This operation is called a [[convolution]] on ''F'' and ''G''. The analogous operation on
p-boxes is straightforward for sums.
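For discrete random variables the Stieltjes integral reduces to a finite sum, which makes the convolution easy to compute exactly. A minimal sketch (two fair dice, chosen purely for illustration):

```python
from fractions import Fraction
from itertools import product

def convolve_pmf(p, q):
    """PMF of X + Y for independent discrete X ~ p and Y ~ q,
    given as dicts mapping values to probabilities."""
    h = {}
    for (x, px), (y, qy) in product(p.items(), q.items()):
        h[x + y] = h.get(x + y, Fraction(0)) + px * qy
    return h

die = {k: Fraction(1, 6) for k in range(1, 7)}
h = convolve_pmf(die, die)
H = lambda z: sum(pz for s, pz in h.items() if s <= z)  # CDF of the sum
print(h[7])   # → 1/6
print(H(4))   # → 1/6
```

Exact rational arithmetic (`Fraction`) avoids rounding artifacts in the probabilities.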
Suppose
and ''Y'' is actually easier than the problem assuming independence.
Makarov<ref name=Makarov/><ref name=Franketal87/><ref name=WilliamsonDowns/> showed that
:''Z'' ~ <big>[ sup</big><sub>x+y=z</sub> max(''F''(''x'') + ''G''(''y'') &minus; 1, 0), <big>inf</big><sub>x+y=z</sub> min(''F''(''x'') + ''G''(''y''), 1) <big>]</big>.
 
These bounds are implied by the [[copula_(probability_theory)#Fr.C3.A9chet.E2.80.93Hoeffding_copula_bounds|Fréchet–Hoeffding]] [[copula (probability theory)|copula]] bounds. The problem can also be solved using the methods of [[mathematical programming]].<ref name=BerleantGoodmanStrauss />
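These pointwise bounds can be evaluated numerically by brute-force search over candidate split points ''x''. The sketch below assumes two standard uniform inputs and a finite grid, both illustrative choices:

```python
def frechet_sum_bounds(F, G, z, xs):
    """Bounds on Pr(X + Y <= z) with no assumption about the
    dependence of X and Y: sup over x of max(F(x)+G(z-x)-1, 0)
    and inf over x of min(F(x)+G(z-x), 1), searched on grid xs."""
    lower = max(max(F(x) + G(z - x) - 1.0, 0.0) for x in xs)
    upper = min(min(F(x) + G(z - x), 1.0) for x in xs)
    return lower, upper

U = lambda x: min(max(x, 0.0), 1.0)        # CDF of a uniform on [0, 1]
xs = [i / 100 for i in range(-100, 201)]
lo, hi = frechet_sum_bounds(U, U, 1.0, xs)
print(round(lo, 6), round(hi, 6))          # → 0.0 1.0
```

The wide answer at ''z'' = 1 is genuine: under perfect negative dependence the sum of two standard uniforms is identically 1, so without a dependence assumption Pr(''X'' + ''Y'' &le; 1) can be anywhere in [0, 1], whereas the independent answer is 0.5.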
 
The convolution under the intermediate assumption that ''X'' and ''Y'' have [[positive quadrant dependence|positive dependence]] is likewise easy to compute, as is the convolution under the extreme assumptions of [[Comonotonicity|perfect positive]] or [[countermonotonicity|perfect negative]] dependency between ''X'' and ''Y''.<ref name=Fersonetal04 />
 
Generalized convolutions for other operations such as subtraction, multiplication, division, etc., can be derived using transformations. For instance, p-box subtraction ''A'' &minus; ''B'' can be defined as ''A'' + (&minus;''B''), where the negative of a p-box ''B'' = [''B''<sub>1</sub>, ''B''<sub>2</sub>] is
[1 &minus; ''B''<sub>2</sub>(&minus;''x''), 1 &minus; ''B''<sub>1</sub>(&minus;''x'')].
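The negation transformation can be sketched numerically using the identity Pr(&minus;''X'' &le; ''x'') = 1 &minus; Pr(''X'' < &minus;''x''), assuming continuous bounding CDFs; the uniform bounds here are hypothetical:

```python
def negate_pbox(B1, B2):
    """Bounding CDFs of -X from the p-box [B1, B2] on X, assuming
    continuous bounds: the new left bound is 1 - B2(-x) and the
    new right bound is 1 - B1(-x) (the roles of the edges swap)."""
    return (lambda x: 1.0 - B2(-x)), (lambda x: 1.0 - B1(-x))

clamp = lambda v: min(max(v, 0.0), 1.0)
B1 = lambda x: clamp(x)          # left bound: CDF of U(0, 1)
B2 = lambda x: clamp(x - 0.5)    # right bound: CDF of U(0.5, 1.5)
N1, N2 = negate_pbox(B1, B2)
print(N1(-0.25), N2(-0.25))      # → 1.0 0.75
xs = [i / 10 for i in range(-20, 11)]
assert all(N1(x) >= N2(x) for x in xs)  # bounds remain properly ordered
```

Note that the complement and reflection are both needed: ''B''<sub>2</sub>(&minus;''x'') alone would be a decreasing function of ''x'' and hence not a distribution function.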
<!--
 
cumulative distribution function is convex on (&minus;&infin;, 0.1] and concave on [0.1, &infin;)
 
That is, ''A'' denotes all the probability distribution functions of normally distributed random variables whose mean is between 0.5 and 0.6, and whose variance is between 0.001 and 0.01. Likewise, ''B'' denotes all probability distribution functions ranging between 0 and 1 whose mean and mode are both 0.1.
 
''a'' = normal([0.3,0.4], sqrt([0.001,.01]))
 
What can be inferred about the sum of these uncertain numbers depends on the assumptions about the stochastic dependence between the quantities.
If, for instance, they can be assumed to be independent, or related according to some other specific [[copula (probability theory)|copula]] or dependence function, the bounds on the sum will be tighter than they would be if their dependence were imprecisely specified (e.g., that they are positively related, or that their interaction can be characterized by a particular correlation coefficient). Making no assumption whatever about the dependencies among the quantities leads to the broadest bounds.
 
In this example, we compute bounds on the sum from only partial information about each of the respective random variables. There is no way to do these calculations with a sampling strategy such as Monte Carlo simulation.
 
Shown below are the bounds on each of the four inputs and bounds on the sum, both with an assumption of independence and without any assumption about the dependence among the variables. The dotted curves represent the inputs and answers that might have been used in a traditional probabilistic assessment that did not acknowledge the uncertainty about the distributions and dependencies. Compare them with the solid edges of the p-boxes to quantify how much the tail risks would have been underestimated.
 
When the quantities are independent and their p-boxes are degenerate so they define particular distribution functions, the result of the probability bounds analysis is the same as would be obtained in a traditional probabilistic convolution such as is commonly implemented with Monte Carlo simulation.
 
Figure 7. Example calculation of a sum of four addends characterized by p-boxes.
 a = lognormal1([.5, .6], sqrt([.001, .01]))
 b = minmaxmode(0, 1, .3)
 c = hist(0, 1, .2, .5, .6, .7, .75, .8)
 d = uniform(0, 1)
 e = a |+| b
The table below lists the summary statistical measures yielded by three analyses of this hypothetical calculation. The second column gives the results that might be obtained by a standard Monte Carlo analysis under an independence assumption (the dotted lines in the figure above). The third and fourth columns give results from probability bounding analyses, either with or without an assumption of independence.
 
 Summary    Monte Carlo    Independence     General
 variance   0.135          [0.086, 0.31]    [0, 0.90]
 
Notice that, while the Monte Carlo simulation produces point estimates, the bounding analyses yield intervals for the various measures. The intervals represent sure bounds on the respective statistics. They reveal just how unsure the answers given by the Monte Carlo simulation actually were. If we look in the last column with no assumption, for instance, we see that the variance might actually be over six times larger than the Monte Carlo simulation estimates.
 
====Condensation====
<dt>Use of dependence operators
<dd>
<dt>Rearranging to reduce repeated uncertainties<dd>The dependency problem can be eliminated by replacing an expression to be evaluated with an algebraically equivalent expression in which no variable appears more than once. For instance, in an expression such as ''a''/''x'' + ''b''/''x'' + ''c''/''x'' + ''d''/''x'', where ''x'' denotes an uncertain quantity, the perfect dependence among the various instantiations of ''x'' is difficult to account for in the computation. But when this expression is replaced by the equivalent expression
(''a'' + ''b'' + ''c'' + ''d'')/''x'', the dependence problem disappears because the rearranged expression has no repeated variables.
 
Likewise, the expression
''x''<sup>2</sup> &minus; ''x''
can be replaced
by the algebraically equivalent expression
(''x'' &minus; 1/2)<sup>2</sup> &minus; 1/4.
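The effect of such a rearrangement can be seen with ordinary interval arithmetic (an ad hoc sketch, not a particular interval library): evaluating ''x''<sup>2</sup> &minus; ''x'' naively treats the two occurrences of ''x'' as if they were independent, while the rearranged form uses ''x'' only once.

```python
def i_sub(p, q):
    """Interval subtraction."""
    return (p[0] - q[1], p[1] - q[0])

def i_mul(p, q):
    """Interval multiplication (general case)."""
    v = [p[0]*q[0], p[0]*q[1], p[1]*q[0], p[1]*q[1]]
    return (min(v), max(v))

def i_sqr(p):
    """Interval square: tight, unlike i_mul(p, p)."""
    lo, hi = p
    if lo <= 0.0 <= hi:                 # interval straddles zero
        return (0.0, max(lo*lo, hi*hi))
    return (min(lo*lo, hi*hi), max(lo*lo, hi*hi))

x = (0.0, 2.0)
naive = i_sub(i_mul(x, x), x)                              # x*x - x, x repeated
tight = i_sub(i_sqr(i_sub(x, (0.5, 0.5))), (0.25, 0.25))   # (x - 1/2)^2 - 1/4
print(naive)   # → (-2.0, 4.0)
print(tight)   # → (-0.25, 2.0)
```

The tight result (&minus;0.25, 2.0) is exactly the range of ''x''<sup>2</sup> &minus; ''x'' over [0, 2]; the naive result is much wider.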
 
<dt>Subinterval reconstitution
 
==Logical expressions==
Logical or [[Boolean_function|Boolean expressions]] involving [[logical_conjunction|conjunctions]] ([[AND_gate|AND]] operations), [[logical_disjunction|disjunctions]] ([[OR_gate|OR]] operations), exclusive disjunctions, equivalences, conditionals, etc. arise in the analysis of fault trees and event trees common in risk assessments. If the probabilities of events are characterized by intervals, as suggested by [[George Boole|Boole]]<ref name="BOOLE1854" /> and [[John Maynard Keynes|Keynes]]<ref name="kyburg99" /> among others, these binary operations are straightforward to evaluate. For example, if the probability of an event A is in the interval P(A) = ''a'' = [0.2, 0.25], and the probability of the event B is in P(B) = ''b'' = [0.1, 0.3], then the probability of the [[logical conjunction|conjunction]] is surely in the interval
: &nbsp;&nbsp;P(A & B) = ''a'' &times; ''b''
:::: = [0.2, 0.25] &times; [0.1, 0.3]
:::: = [0.2 &times; 0.1, 0.25 &times; 0.3]
:::: = [0.02, 0.075]
so long as A and B can be assumed to be independent events. If they are not independent, we can still bound the conjunction using the classical [[Frechet inequalities|Fr&eacute;chet inequality]]. In this case, we can infer at least that the probability of the joint event A & B is surely within the interval
: &nbsp;&nbsp;P(A & B) = env(max(0, ''a''+''b''&minus;1), min(''a'', ''b''))
:::: = env(max(0, [0.2, 0.25]+[0.1, 0.3]&minus;1), min([0.2, 0.25], [0.1, 0.3]))
:::: = env([max(0, 0.2+0.1&minus;1), max(0, 0.25+0.3&minus;1)], [min(0.2, 0.1), min(0.25, 0.3)])
:::: = env([0,0], [0.1, 0.25])
:::: = [0, 0.25]
where env([''x''<sub>1</sub>,''x''<sub>2</sub>], [''y''<sub>1</sub>,''y''<sub>2</sub>]) is [min(''x''<sub>1</sub>,''y''<sub>1</sub>), max(''x''<sub>2</sub>,''y''<sub>2</sub>)]. Likewise, the probability of the [[logical disjunction|disjunction]] is surely in the interval
: &nbsp;&nbsp;P(A v B) = ''a'' + ''b'' &minus; ''a'' &times; ''b'' = 1 &minus; (1 &minus; ''a'') &times; (1 &minus; ''b'')
:::: = 1 &minus; (1 &minus; [0.2, 0.25]) &times; (1 &minus; [0.1, 0.3])
:::: = 1 &minus; [0.75, 0.8] &times; [0.7, 0.9]
:::: = 1 &minus; [0.525, 0.72]
:::: = [0.28, 0.475]
if A and B are independent events. If they are not independent, the Fr&eacute;chet inequality bounds the disjunction
: &nbsp;&nbsp;P(A v B) = env(max(''a'', ''b''), min(1, ''a'' + ''b''))
:::: = env(max([0.2, 0.25], [0.1, 0.3]), min(1, [0.2, 0.25] + [0.1, 0.3]))
:::: = [0.2, 0.55].
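The interval calculations above can be reproduced with a few lines of ad hoc interval arithmetic (a sketch; the helper names are ours, not a library). Note that the independent disjunction is computed as 1 &minus; (1 &minus; ''a'')(1 &minus; ''b''), a form in which neither ''a'' nor ''b'' is repeated:

```python
def i_add(p, q): return (p[0] + q[0], p[1] + q[1])
def i_mul(p, q): return (p[0] * q[0], p[1] * q[1])  # valid for [0,1] probabilities
def i_not(p):    return (1.0 - p[1], 1.0 - p[0])    # complement 1 - p
def i_max(p, q): return (max(p[0], q[0]), max(p[1], q[1]))
def i_min(p, q): return (min(p[0], q[0]), min(p[1], q[1]))
def env(p, q):   return (min(p[0], q[0]), max(p[1], q[1]))
r = lambda p: (round(p[0], 4), round(p[1], 4))      # tidy float printing

a, b = (0.2, 0.25), (0.1, 0.3)
one, zero = (1.0, 1.0), (0.0, 0.0)

and_ind = i_mul(a, b)                                       # independent AND
and_fre = env(i_max(zero, i_add(i_add(a, b), (-1.0, -1.0))),
              i_min(a, b))                                  # Frechet AND
or_ind  = i_not(i_mul(i_not(a), i_not(b)))                  # 1 - (1-a)(1-b)
or_fre  = env(i_max(a, b), i_min(one, i_add(a, b)))         # Frechet OR
print(r(and_ind), r(and_fre))   # → (0.02, 0.075) (0.0, 0.25)
print(r(or_ind), r(or_fre))     # → (0.28, 0.475) (0.2, 0.55)
```

The four results match the worked intervals in the text.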
 
It is also possible to compute interval bounds on the conjunction or disjunction under other assumptions about the dependence between A and B. For instance, one might assume they are positively dependent, in which case the resulting interval is not as tight as the answer assuming independence but tighter than the answer given by the Fr&eacute;chet inequality. Comparable calculations are used for other logical functions such as negation, exclusive disjunction, etc. When the Boolean expression to be evaluated becomes complex, it may be necessary to evaluate it using the methods of mathematical programming<ref name=Hailperin86 /> to get best-possible bounds on the expression. If the probabilities of the events are characterized by probability distributions or p-boxes rather than intervals, then analogous calculations can be done to obtain distributional or p-box results characterizing the probability of the top event. <!--
 
Prob(A and B) = Prob(A) * Prob(B).
 
Operation Formula
conjunction [ max(0, a+b–1), min(a, b) ],
disjunction [ max(a, b), min(1, a+b) ],
 
a = [0.2, 0.25]
 
==Magnitude comparisons==
The probability that an uncertain number represented by a p-box ''D'' is less than zero is the interval Pr(''D'' < 0) = [<u>''F''</u>(0), ''{{overbar|F}}''(0)], where ''{{overbar|F}}''(0) is the left bound of the probability box ''D'' and <u>''F''</u>(0) is its right bound, both evaluated at zero. Two uncertain numbers represented by probability boxes may then be compared for numerical magnitude with the following encodings:
:''A'' < ''B'' = Pr(''A'' &minus; ''B'' < 0),
:''A'' > ''B'' = Pr(''B'' &minus; ''A'' < 0),
:''A'' &le; ''B'' = Pr(''A'' &minus; ''B'' &le; 0), and
:''A'' &ge; ''B'' = Pr(''B'' &minus; ''A'' &le; 0).
Thus the probability that ''A'' is less than ''B'' is the same as the probability that their difference is less than zero, and this probability can be said to be the value of the expression ''A'' < ''B''.
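A sketch of the Pr(''D'' < 0) computation, with hypothetical uniform CDFs standing in for the two bounding curves (continuity assumed; comparing ''A'' < ''B'' would first form the p-box of the difference ''A'' &minus; ''B'' and then apply the same evaluation):

```python
def uniform_cdf(lo, hi):
    """CDF of a uniform distribution on [lo, hi]."""
    def F(x):
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return F

def prob_below_zero(B1, B2):
    """Pr(D < 0) for a p-box D = [B1, B2]: the interval
    [B2(0), B1(0)], i.e., lower probability from the right
    bounding curve, upper probability from the left curve."""
    return (B2(0.0), B1(0.0))

B1 = uniform_cdf(-1.0, 1.0)   # left (upper) bounding CDF of D
B2 = uniform_cdf(-0.5, 1.5)   # right (lower) bounding CDF of D
print(prob_below_zero(B1, B2))   # → (0.25, 0.5)
```

Every distribution inside this p-box assigns probability between 0.25 and 0.5 to the event ''D'' < 0.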
 
 
==Sampling-based computation==
Some analysts<ref>Alvarez, D. A., 2006. On the calculation of the bounds of probability of events using infinite random sets. ''International Journal of Approximate Reasoning'' '''43''': 241–267.</ref><ref>Baraldi, P., Popescu, I. C., Zio, E., 2008. Predicting the time to failure of a randomly degrading component by a hybrid Monte Carlo and possibilistic method. ''IEEE Proc. International Conference on Prognostics and Health Management''.</ref><ref>Batarseh, O. G., Wang, Y., 2008. Reliable simulation with input uncertainties using an interval-based approach. ''IEEE Proc. Winter Simulation Conference''.</ref><ref>Roy, Christopher J., and Michael S. Balch (2012). A holistic approach to uncertainty quantification with application to supersonic nozzle thrust. ''International Journal for Uncertainty Quantification'' [in press].</ref><ref>Zhang, H., Mullen, R. L., Muhanna, R. L. (2010). Interval Monte Carlo methods for structural reliability. ''Structural Safety'' '''32''': 183–190.</ref><ref>Zhang, H., Dai, H., Beer, M., Wang, W. (2012). Structural reliability analysis on the basis of small samples: an interval quasi-Monte Carlo method. ''Mechanical Systems and Signal Processing'' [in press].</ref> use sampling-based approaches to computing probability bounds, including [[Monte Carlo simulation]], [[Latin hypercube]] methods or [[importance sampling]]. These approaches cannot assure mathematical rigor in the result because such simulation methods are approximations, although their performance can generally be improved simply by increasing the number of replications in the simulation. Thus, unlike the analytical theorems or methods based on mathematical programming, sampling-based calculations usually cannot produce [[verified computing|verified computations]]. However, sampling-based methods can be very useful in addressing a variety of problems which are computationally [[NP-hard|difficult]] to solve analytically or even to rigorously bound. 
One important example is the use of Cauchy-deviate sampling to avoid the [[curse of dimensionality]] in propagating [[Interval (mathematics)|interval]] uncertainty through high-dimensional problems.<ref>Trejo, R., Kreinovich, V. (2001). [http://www.cs.utep.edu/vladik/2000/tr00-17.pdf Error estimations for indirect measurements: randomized vs. deterministic algorithms for ‘black-box’ programs]. ''Handbook on Randomized Computing'', S. Rajasekaran, P. Pardalos, J. Reif, and J. Rolim (eds.), Kluwer, 673–729.</ref>
 
==Relationship to other uncertainty propagation approaches==
PBA belongs to a class of methods that use [[imprecise probability|imprecise probabilities]] to simultaneously represent [[Uncertainty_quantification|aleatoric and epistemic uncertainties]]. PBA is a generalization of both [[interval analysis]] and probabilistic [[convolution_of_probability_distributions|convolution]] such as is commonly implemented with [[Monte Carlo simulation]]. PBA is also closely related to [[robust Bayes analysis]], which is sometimes called [[Bayesian sensitivity analysis]]. PBA is an alternative to [[second-order Monte Carlo simulation]].
 
==Applications==
Value of information; dilation
===Bayesian inference of p-boxes===
Vicky Montgomery's dissertation
===Analysis of data consisting of intervals===
===Validation===
Doubt about the function that combines inputs
==Limitations and drawbacks==
Loses modal information; Cedric Baudrit's dissertation
==Generalizations==
Destercke's dissertation
-->