Latin hypercube sampling

The [[statistical]] method '''Latin hypercube sampling''' (LHS) was developed by [[Ronald L. Iman]], J. C. Helton, and J. E. Campbell to generate a distribution of plausible collections of parameter values from a [[multidimensional distribution]].
 
Their paper ''An approach to sensitivity analysis of computer models, Part I. Introduction, input variable selection and preliminary variable assessment'' appeared in the ''Journal of Quality Technology'' in [[1981]].
 
In two dimensions the difference between random sampling, Latin hypercube sampling, and orthogonal sampling can be explained as follows:
# In '''random''' sampling, new sample points are generated without taking into account the previously generated sample points. One thus does not necessarily need to know beforehand how many sample points are needed.
# In '''Latin hypercube''' sampling, one must first decide how many sample points to use and, for each sample point, remember in which row and column it was taken.
# In '''orthogonal''' sampling, the sample space is divided into equally probable subspaces (the figure above shows four subspaces). All sample points are then chosen simultaneously, making sure that the total ensemble of sample points is a Latin hypercube sample and that each subspace is sampled with the same density.
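The Latin hypercube scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the original authors' implementation: each of the ''d'' dimensions is cut into ''n'' equally probable intervals, one value is drawn in each interval, and the intervals are shuffled independently per dimension so that every row and column contains exactly one sample point.

```python
import random

def latin_hypercube_sample(n, d, rng=random):
    """Draw n points in [0, 1)^d such that, in every dimension,
    each of the n equally probable intervals holds exactly one point."""
    columns = []
    for _ in range(d):
        # One uniform draw inside each of the n strata [i/n, (i+1)/n).
        column = [(i + rng.random()) / n for i in range(n)]
        # Shuffle so the strata are paired randomly across dimensions.
        rng.shuffle(column)
        columns.append(column)
    # Transpose: one tuple per sample point.
    return list(zip(*columns))
```

For example, `latin_hypercube_sample(5, 2)` returns five points in the unit square; projecting them onto either axis lands exactly one point in each of the five intervals of width 1/5.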
 
In layman's terms, orthogonal sampling ensures that the ensemble of random numbers is a very good representative of the real variability, LHS ensures that the ensemble is a good representative of the real variability, whereas traditional random sampling (sometimes called brute force) is just an ensemble of random numbers without any such guarantees.
 
[[Category:Statistics]]