Computer experiment
 
==Computer simulation modeling==
Modeling of computer experiments typically uses a Bayesian framework. [[Bayesian statistics]] is an interpretation of the field of [[statistics]] in which all evidence about the true state of the world is explicitly expressed in the form of [[probabilities]]. In the realm of computer experiments, the Bayesian interpretation implies that we must form a [[prior distribution]] representing our prior belief about the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989) [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177012413]. While the Bayesian approach is widely used, [[frequentist]] approaches have recently been discussed [http://www2.isye.gatech.edu/~jeffwu/publications/calibration-may1.pdf].
 
The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, [[initial conditions]] and [[Forcing function (differential equations)|forcing functions]]. It is natural to see the simulation as a deterministic function that maps these ''inputs'' into a collection of ''outputs''. On the basis of seeing our simulator this way, it is common to refer to the collection of inputs as <math>x</math>, the computer simulation itself as <math>f</math>, and the resulting output as <math>f(x)</math>. Both <math>x</math> and <math>f(x)</math> are vector quantities, and they can be very large collections of values, often indexed by space, or by time, or by both space and time.
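The input–output view described above can be illustrated with a minimal Python sketch. The simulator here is a purely hypothetical toy model (a damped oscillation whose amplitude and decay rate play the role of model coefficients); the point is only that the code is a deterministic function mapping an input vector <math>x</math> to an output vector <math>f(x)</math> indexed by time.

```python
import numpy as np

def simulator(x):
    """Toy deterministic 'computer model': maps an input vector x
    to an output vector f(x).  The damped oscillation and its two
    coefficients (amplitude, decay) are illustrative assumptions."""
    amplitude, decay = x
    t = np.linspace(0.0, 10.0, 50)   # outputs indexed by time
    return amplitude * np.exp(-decay * t) * np.cos(t)

x = np.array([2.0, 0.3])             # one setting of the inputs
y = simulator(x)                     # f(x): a vector of 50 outputs
assert y.shape == (50,)
# Determinism: the same inputs always reproduce the same outputs.
assert np.array_equal(y, simulator(x))
```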
 
Although <math>f(\cdot)</math> is known in principle, in practice this is not the case: many simulators comprise tens of thousands of lines of high-level computer code, which is not accessible to intuition. For some simulations, such as climate models, evaluation of the output for a single set of inputs can require millions of computer hours [http://amstat.tandfonline.com/doi/abs/10.1198/TECH.2009.0015#.UbixC_nFWHQ].
 
===Gaussian process prior===
 
==Design of computer experiments==
The design of computer experiments differs considerably from [[design of experiments]] for parametric models. Since a Gaussian process prior has an infinite-dimensional representation, the concepts of A- and D-optimality criteria (see [[Optimal design]]), which focus on reducing the error in the parameters, cannot be used. Replications would also be wasteful in cases where the computer simulation has no error. Criteria used to determine a good experimental design include integrated mean squared prediction error [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177012413] and distance-based criteria [http://www.sciencedirect.com/science/article/pii/037837589090122B].
 
Popular strategies for design include [[latin hypercube sampling]] and [[low discrepancy sequences]].
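Both strategies are available in SciPy's quasi-Monte Carlo module. The sketch below generates a Latin hypercube design and a scrambled Sobol' (low-discrepancy) design on the unit cube, then rescales one of them to illustrative input ranges; the dimension, run count, and bounds are arbitrary choices for demonstration.

```python
from scipy.stats import qmc

d = 3                                    # number of simulator inputs
n = 16                                   # number of design points (runs)

# Latin hypercube: each one-dimensional projection is evenly stratified.
lhs = qmc.LatinHypercube(d=d, seed=0)
X_lhs = lhs.random(n)                    # shape (n, d), points in [0, 1)^d

# Scrambled Sobol' sequence: a low-discrepancy space-filling design.
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
X_sob = sobol.random(n)                  # n a power of 2 preserves balance

# Rescale the unit-cube design to the actual (here: made-up) input ranges.
X = qmc.scale(X_lhs, l_bounds=[0, 0, 0], u_bounds=[10, 1, 5])
assert X_lhs.shape == (n, d)
assert X.min() >= 0 and X[:, 0].max() <= 10
```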
 
===Problems with massive sample sizes===
Unlike physical experiments, it is not uncommon for computer experiments to have thousands of different input combinations. Because standard inference requires [[inversion|matrix inversion]] of a square matrix whose size equals the number of samples (<math>n</math>), the cost grows as <math> \mathcal{O} (n^3) </math>. Inversion of large, dense matrices can also introduce numerical inaccuracies. Currently, this problem is avoided by approximation methods, e.g. [http://www.stat.wisc.edu/~zhiguang/Multistep_AOS.pdf].
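The cubic cost arises from solving a dense linear system of size <math>n</math>. A minimal sketch (the toy data, squared-exponential kernel, and length-scale are illustrative assumptions): in practice one factorizes the covariance matrix rather than inverting it explicitly, which has the same <math>\mathcal{O}(n^3)</math> scaling but is cheaper by a constant factor and numerically more stable.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 400                                   # number of simulator runs
X = rng.random((n, 1))                    # design points in [0, 1]
y = np.sin(6.0 * X[:, 0])                 # toy simulator outputs

# Dense n x n covariance matrix of a squared-exponential GP prior;
# the small diagonal "nugget" keeps the matrix positive definite.
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(n)

# Standard GP inference needs the solution of K @ alpha = y.  The
# Cholesky factorization below costs O(n^3) floating-point operations,
# which is why n in the thousands becomes expensive.
c_and_low = cho_factor(K)
alpha = cho_solve(c_and_low, y)

assert np.allclose(K @ alpha, y, atol=1e-4)
```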
 
==See also==