Modeling of computer experiments typically uses a Bayesian framework. [[Bayesian statistics]] is an interpretation of the field of [[statistics]] in which all evidence about the true state of the world is explicitly expressed in the form of [[probabilities]]. In the realm of computer experiments, the Bayesian interpretation implies that we must form a [[prior distribution]] representing our prior belief about the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989) [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177012413]. While the Bayesian approach is widely used, [[frequentist]] approaches have recently been discussed [http://www2.isye.gatech.edu/~jeffwu/publications/calibration-may1.pdf].
The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, initial conditions and forcing functions. It is natural to see the simulation as a deterministic function <math>f(\cdot)</math> that maps these inputs <math>x</math> into the collection of outputs <math>f(x)</math>; a sketch of this black-box view appears below.
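The following is a minimal sketch of this black-box view, not drawn from any particular application: the function <code>f</code> is a hypothetical stand-in for an expensive deterministic simulator with two inputs, and the design points at which it is run are generated by Latin hypercube sampling, a standard space-filling design in this literature, via SciPy's <code>scipy.stats.qmc</code> module.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import qmc

def f(x):
    # Hypothetical placeholder for an expensive deterministic computer code
    # with two inputs (e.g. a model coefficient and an initial condition).
    return np.exp(-x[0]) * np.cos(4.0 * x[1])

# Eight space-filling design points in the unit square [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=8)

# Each run of the simulator yields one input-output pair (x, f(x)).
y = np.array([f(x) for x in X])
print(X)
print(y)
</syntaxhighlight>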
Although <math>f(\cdot)</math> is known in principle, in practice it is not: many simulators comprise tens of thousands of lines of high-level computer code, which is not accessible to intuition. For some simulations, such as climate models, evaluation of the output for a single set of inputs can require millions of computer hours [http://amstat.tandfonline.com/doi/abs/10.1198/TECH.2009.0015#.UbixC_nFWHQ].
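Because each run is costly, the usual remedy is to fit a cheap statistical emulator to a small number of simulator runs. The sketch below assumes a Gaussian process prior on <math>f(\cdot)</math>, the choice popularized by Sacks et al. (1989), and uses scikit-learn's <code>GaussianProcessRegressor</code> as a convenient stand-in for a fully Bayesian analysis; the one-input <code>simulator</code> is again a hypothetical placeholder.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    # Hypothetical cheap stand-in for an expensive deterministic code.
    return np.sin(3.0 * x) + x**2

# A handful of simulator runs at chosen design points.
X_train = np.linspace(0.0, 2.0, 6).reshape(-1, 1)
y_train = simulator(X_train).ravel()

# The RBF kernel encodes the prior belief that f varies smoothly.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# Posterior mean and uncertainty of f at untried inputs, obtained without
# any further runs of the simulator itself.
X_new = np.array([[0.7], [1.3]])
mean, std = gp.predict(X_new, return_std=True)
print(mean, std)
</syntaxhighlight>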