Elementary effects method

 
: <math> Y = f(X_1, X_2, \ldots, X_k).</math>
 
 
The original EE method of Morris <ref>Morris, M. D. (1991). Factorial sampling plans for preliminary computational experiments. ''Technometrics'', '''33''', 161–174.</ref> provides two sensitivity measures for each input factor:
In this design, each model input is assumed to vary across <math>p</math> selected levels in the space of the input factors. The region of experimentation <math>\Omega</math> is thus a <math>k</math>-dimensional <math>p</math>-level grid.
 
[[File:EE_trajectory.jpg|thumb | right | 250px | Figure 1: Example of a trajectory in 3 dimensions.]]
 
Each trajectory is composed of <math>(k+1)</math> points, since the input factors move one at a time by a step <math> \Delta </math> within <math>\{0, 1/(p-1), \ldots , 1-1/(p-1), 1\}</math>, while all the other factors remain fixed. Figure 1 shows an example of a trajectory in three dimensions.
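The construction of one such trajectory can be sketched as follows. This is a minimal illustration, not Morris's exact randomized design: the function name is hypothetical, and it assumes <math> p </math> even with the step <math> \Delta = p/[2(p-1)]</math> suggested later in this article, always moving factors upward from a low starting level so that every point stays on the grid.

```python
import numpy as np

def morris_trajectory(k, p, rng):
    """Sketch of one trajectory: (k+1) points on a k-dimensional
    p-level grid in [0, 1]^k, each step moving exactly one factor
    by delta (illustrative helper, not the full Morris design)."""
    delta = p / (2 * (p - 1))              # step for even p
    # start on a grid point low enough that x + delta stays in [0, 1]
    start_levels = np.arange(p // 2) / (p - 1)
    x = rng.choice(start_levels, size=k)
    points = [x.copy()]
    for i in rng.permutation(k):           # factors move one at a time
        x = x.copy()
        x[i] += delta
        points.append(x)
    return np.asarray(points)              # shape (k+1, k)
```

Consecutive points of the returned array differ in exactly one coordinate, by <math> \Delta </math>, as required.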
 
Along each trajectory the so-called ''elementary effect'' for each input factor is defined as:<br />
 
: <math> d_i(X) = \frac{Y(X_1, \ldots ,X_{i-1}, X_i + \Delta, X_{i+1}, \ldots, X_k ) - Y( \mathbf X)}{\Delta} </math>,
 
where <math> \mathbf{X} = (X_1, X_2, \ldots, X_k)</math> is any selected value in <math> \Omega </math> such that the transformed point is still in <math> \Omega </math> for each index <math> i=1,\ldots, k. </math>
 
 
<math> r </math> elementary effects are estimated for each input, <math> d_i\left(X^{(1)} \right), d_i\left( X^{(2)} \right), \ldots, d_i\left( X^{(r)} \right) </math>, by randomly sampling <math> r </math> points <math> X^{(1)}, X^{(2)}, \ldots , X^{(r)}</math>.
Usually <math> r </math> ranges from 4 to 10, depending on the number of input factors, on the computational cost of the model, and on the choice of the number of levels <math> p </math>: a high number of levels to be explored needs to be balanced by a high number of trajectories in order to obtain an exploratory sample.
It has been shown that a convenient choice for the parameters <math> p </math> and <math> \Delta </math> is <math> p </math> even and <math> \Delta </math> equal to <math>
p/[2(p-1)]</math>, as this ensures equal probability of sampling in the input space.
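The estimation of the <math> r </math> elementary effects per factor can be sketched as below. This is a simplified stand-in for Morris's randomized design (the helper name and the one-directional moves are illustrative assumptions): each of the <math> r </math> trajectories moves every factor once, in random order, and records the finite difference divided by <math> \Delta </math>.

```python
import numpy as np

def elementary_effects(f, k, p, r, seed=0):
    """Sketch: r elementary effects per input factor, estimated along
    r one-at-a-time trajectories (simplified, illustrative design)."""
    rng = np.random.default_rng(seed)
    delta = p / (2 * (p - 1))
    d = np.empty((r, k))
    for j in range(r):
        # random grid start point with room for an upward step
        start_levels = np.arange(p // 2) / (p - 1)
        x = rng.choice(start_levels, size=k)
        y = f(x)
        for i in rng.permutation(k):        # move each factor once
            x_step = x.copy()
            x_step[i] += delta
            y_step = f(x_step)
            d[j, i] = (y_step - y) / delta  # elementary effect of factor i
            x, y = x_step, y_step
    return d                                # shape (r, k)
```

For a linear model such as <math> Y = 2X_1 + 3X_2 - X_3 </math>, every elementary effect of a factor equals its coefficient, which gives a quick sanity check of the estimator.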
The two measures <math> \mu </math> and <math> \sigma </math> are defined as the mean and the standard deviation of the distribution of the elementary effects of each input:<br />
: <math> \mu_i = \frac{1}{r} \sum_{j=1}^r d_i \left( X^{(j)} \right) </math>,<br />
: <math> \sigma_i = \sqrt{ \frac{1}{(r-1)} \sum_{j=1}^r \left( d_i \left( X^{(j)} \right) - \mu_i \right)^2} </math>.<br />
 
These two measures need to be read together (e.g. on a two-dimensional graph, see Figure 2) in order to rank the input factors by importance and to identify those inputs which do not influence the output variability. Low values of both <math> \mu </math> and <math> \sigma </math> correspond to a non-influential input.<br />
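Given a matrix of elementary effects, the two measures follow directly from the formulas above; a minimal sketch, where the array <code>d</code> holds placeholder values with <math> r = 4 </math> trajectories and <math> k = 3 </math> factors:

```python
import numpy as np

# d holds r elementary effects for each of k factors (placeholder values)
d = np.array([[1.9,  0.1, 0.0],
              [2.1, -0.1, 0.0],
              [2.0,  0.2, 0.0],
              [2.0, -0.2, 0.0]])       # shape (r, k) with r = 4, k = 3

mu = d.mean(axis=0)                    # mu_i: mean over the r effects
sigma = d.std(axis=0, ddof=1)          # sigma_i: uses 1/(r-1), as in the formula
# Factor 1: large mu, small sigma -> strong, roughly linear influence.
# Factor 3: mu = sigma = 0 -> non-influential input.
```

Note the <code>ddof=1</code> argument, which matches the <math> 1/(r-1) </math> factor in the definition of <math> \sigma_i </math>.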