{{Multiple issues|
{{Underlinked|date=December 2012}}
{{Primary sources|date=January 2010}}
}}
The '''elementary effects (EE) method''' is the most commonly used{{Citation needed|date=January 2010}} screening method in [[sensitivity analysis]]. It is applied to identify non-influential inputs for a computationally costly [[mathematical model]] or for a model with a large number of inputs, where the cost of estimating other sensitivity analysis measures, such as the variance-based measures, would not be affordable. Like all screening methods, the EE method provides qualitative sensitivity measures, i.e. measures which identify non-influential inputs or rank the input factors in order of importance, but which do not quantify exactly the relative importance of the inputs.
To exemplify the EE method, consider a mathematical model with <math> k </math> input factors. Let <math> Y </math> be the output of interest (a scalar for simplicity):
: <math> Y = f(X_1, X_2, \ldots, X_k). </math>
The original EE method of Morris <ref>Morris, M. D. (1991). Factorial sampling plans for preliminary computational experiments. ''Technometrics'', '''33''', 161–174.</ref> provides two sensitivity measures for each input factor:
* the measure <math> \mu </math>, assessing the overall importance of an input factor on the model output;
* the measure <math> \sigma </math>, describing the ensemble of the factor's effects, whether nonlinear and/or due to interactions with other factors.
These measures are obtained through a design based on the construction of a series of trajectories in the space of the inputs, where inputs are randomly moved one at a time. Along each trajectory the so-called ''elementary effect'' for each input factor is defined as:
: <math> d_i(X) = \frac{Y(X_1, \ldots, X_{i-1}, X_i + \Delta, X_{i+1}, \ldots, X_k) - Y(\mathbf{X})}{\Delta} </math>
where <math> \mathbf{X} = (X_1, X_2, \ldots, X_k)</math> is any selected value in <math> \Omega </math> such that the transformed point is still in <math> \Omega </math> for each index <math> i=1,\ldots, k. </math>
<math> r </math> elementary effects are estimated for each input, <math> d_i\left(X^{(1)} \right), d_i\left( X^{(2)} \right), \ldots, d_i\left( X^{(r)} \right) </math>, by randomly sampling <math> r </math> points <math> X^{(1)}, X^{(2)}, \ldots, X^{(r)}</math>.
Usually <math> r </math> is in the range 4–10, depending on the number of input factors, on the computational cost of the model and on the choice of the number of levels <math> p </math>, since a high number of levels to be explored needs to be balanced by a high number of trajectories in order to obtain an exploratory sample. It can be shown that a convenient choice for the parameters <math> p </math> and <math> \Delta </math> is <math> p </math> even and <math> \Delta </math> equal to <math> p/[2(p-1)]</math>, as this ensures equal probability of sampling in the input space.
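As an illustration, the sampling of elementary effects can be sketched in a few lines of NumPy. This is a simplified one-at-a-time scheme with independent base points, not the full Morris trajectory design; the function name and arguments are illustrative, and the model is assumed to be a Python function defined on the unit hypercube <math>[0,1]^k</math>:

```python
import numpy as np

def elementary_effects(f, k, r=10, p=4, seed=0):
    """Estimate r elementary effects per input for a model f on [0, 1]^k.

    Uses the step Delta = p / (2*(p-1)) recommended for an even number of
    levels p. Base points are drawn from the p-level grid, restricted to
    levels where X_i + Delta stays inside [0, 1].
    """
    rng = np.random.default_rng(seed)
    delta = p / (2 * (p - 1))
    # Grid levels {0, 1/(p-1), ..., 1}; keep only those from which the
    # perturbed point x + delta remains in the unit interval.
    levels = np.arange(p) / (p - 1)
    valid = levels[levels + delta <= 1.0]
    d = np.empty((r, k))
    for j in range(r):
        x = rng.choice(valid, size=k)   # random base point X^{(j)}
        y0 = f(x)
        for i in range(k):
            x_pert = x.copy()
            x_pert[i] += delta          # move input i one step of size delta
            d[j, i] = (f(x_pert) - y0) / delta
    return d
```

For a linear model the elementary effect of each input equals its coefficient for every sampled point, which gives a quick sanity check of the implementation.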
The two measures <math> \mu </math> and <math> \sigma </math> are defined as the mean and the standard deviation of the distribution of the elementary effects of each input:<br />
: <math> \mu_i = \frac{1}{r} \sum_{j=1}^r d_i \left( X^{(j)} \right) </math>,
: <math> \sigma_i = \sqrt{ \frac{1}{(r-1)} \sum_{j=1}^r \left( d_i \left( X^{(j)} \right) - \mu_i \right)^2 } </math>
These two measures need to be read together (e.g. on a two-dimensional graph) in order to rank the input factors in order of importance and to identify those inputs which do not influence the output variability. Low values of both <math> \mu </math> and <math> \sigma </math> correspond to a non-influential input.
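The two formulas above translate directly into code. A minimal sketch, assuming the <math> r </math> elementary effects of each input have already been collected into an <math> r \times k </math> NumPy array (the function name is illustrative):

```python
import numpy as np

def morris_measures(d):
    """Given an (r, k) array of elementary effects, return (mu, sigma).

    mu_i is the sample mean of input i's elementary effects; sigma_i is
    the sample standard deviation (ddof=1 matches the 1/(r-1) factor in
    the definition).
    """
    mu = d.mean(axis=0)
    sigma = d.std(axis=0, ddof=1)
    return mu, sigma
```

Inputs whose <math> (\mu_i, \sigma_i) </math> pair lies near the origin of the two-dimensional graph can then be flagged as candidates for fixing at a nominal value in subsequent analyses.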