Latin hypercube sampling: Difference between revisions

In two dimensions the difference between random sampling, Latin hypercube sampling, and orthogonal sampling can be explained as follows:
#In '''random sampling''' new sample points are generated without taking into account the previously generated sample points. One does not necessarily need to know beforehand how many sample points are needed.
#In '''Latin hypercube sampling''' one must first decide how many sample points to use, and for each sample point remember in which row and column it was taken. This configuration is analogous to having N [[Rook_(chess)|rooks]] on a chess board without any of them threatening each other.
#In '''orthogonal sampling''', the sample space is divided into equally probable subspaces. All sample points are then chosen simultaneously, making sure that the total set of sample points forms a Latin hypercube sample and that each subspace is sampled with the same density.
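
The rook property described above can be sketched in a few lines of Python: along each axis the unit interval is split into N equal strata, one jittered point is placed in each stratum, and the per-axis coordinates are shuffled before being combined into points. The function name below is illustrative, not from any particular library.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Draw n points in [0, 1)^dims so that each of the n equal-width
    strata along every axis contains exactly one point (the 'rooks on
    a chessboard' property)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(dims):
        # one point per stratum, jittered within it, then shuffled
        coords = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(coords)
        samples.append(coords)
    # transpose: per-axis coordinate lists -> list of points
    return list(zip(*samples))

points = latin_hypercube(5, 2)
# along each axis, stratum [i/5, (i+1)/5) holds exactly one point
for axis in range(2):
    strata = sorted(int(p[axis] * 5) for p in points)
    assert strata == [0, 1, 2, 3, 4]
```

Note that the shuffle is what decouples the axes: without it, every point would lie on the diagonal of the grid.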
 
Thus, orthogonal sampling ensures that the set of random numbers is a very good representative of the real variability, LHS ensures that it is representative of the real variability, while traditional random sampling (sometimes called brute force) is just a set of random numbers without any such guarantee.
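
This coverage guarantee can be checked directly. The sketch below (helper name is illustrative) counts how many of the N equal-width strata on one axis actually receive a point: a Latin hypercube sample always covers all N, whereas plain random sampling may leave strata empty and hit others several times.

```python
import random

def strata_covered(points, n):
    """Count how many of the n equal-width strata on the first axis
    contain at least one sample point."""
    return len({int(x * n) for x, _ in points})

rng = random.Random(1)
n = 10

# plain random sampling: no per-stratum guarantee
random_pts = [(rng.random(), rng.random()) for _ in range(n)]

# Latin hypercube sampling: one jittered point per stratum on each
# axis, with one axis shuffled to decouple the coordinates
lhs_x = [(i + rng.random()) / n for i in range(n)]
lhs_y = [(i + rng.random()) / n for i in range(n)]
rng.shuffle(lhs_y)
lhs_pts = list(zip(lhs_x, lhs_y))

# LHS always achieves full stratum coverage; random sampling usually
# does not (it only happens with probability n!/n^n)
assert strata_covered(lhs_pts, n) == n
assert strata_covered(random_pts, n) <= n
```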