Explore: LHS Sampling


Latin Hypercube 🔗

High-dimensional spaces must be explored with dedicated methods from the literature, since a full Cartesian product would consume far too much memory. OpenMOLE includes two of these methods, the Sobol sequence and Latin Hypercube Sampling (LHS), which can be passed as an argument to the DirectSampling task:


[Method scores chart: Input Exploration, Output Exploration, Sensitivity, Optimisation]

Method scores
These two methods perform well in terms of input space exploration, which is expected since that is what they were designed for. They are superior to uniform sampling or grid sampling in this respect, but share the same intrinsic limitations. Neither offers a specific way of handling the stochasticity of the model, beyond standard replications.
These methods are not expensive per se; their cost depends on the size of the input space you want to cover.
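
For comparison, a Sobol sequence is declared in a very similar way. The sketch below uses hypothetical variable names (x1, x2); refer to the Sobol documentation for the full syntax:

val x1 = Val[Double]
val x2 = Val[Double]

// Hypothetical example: 100 points of the Sobol sequence over two bounded inputs
val my_sobol_sampling =
  SobolSampling(
    100,
    x1 in (0.0, 1.0),
    x2 in (0.0, 1.0)
  )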



Latin Hypercube Sampling 🔗

The syntax of the LHS sampling is the following:

val i = Val[Double]
val j = Val[Double]
val values = Val[Array[Double]]

val my_LHS_sampling =
    LHS(
      100, // Number of points of the LHS
      i in (0.0, 10.0),
      j in (0.0, 5.0),
      values in Vector((0.0, 1.0), (0.0, 10.0), (5.0, 9.0)) // Generate part of the LHS sampling inside the array values: one value drawn in each range, for every point
    )
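
As noted in the method scores above, stochasticity is only handled through standard replications. A common pattern, sketched below under the assumption that your model reads a mySeed input, is to combine the LHS with a random seed factor so that each point is evaluated several times:

val mySeed = Val[Int]

// Hypothetical combination: each of the 100 LHS points is replicated 10 times,
// each replication receiving a different random seed
val my_LHS_with_replications =
  my_LHS_sampling x (mySeed in (UniformDistribution[Int]() take 10))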

Usage in DirectSampling tasks 🔗

Once a sampling is defined, you can add it to a DirectSampling task through the sampling argument. For example, assuming you have already declared the inputs, the outputs, and a model task called myModel, the LHS sampling defined above could be used like this:
val myExploration = DirectSampling(
  evaluation = myModel,
  sampling = my_LHS_sampling,
  aggregation = mean
)

myExploration
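
Putting all of the above together, here is a minimal standalone script sketch. The ScalaTask used as the model and its output o are hypothetical placeholders for your own model; the exploration results are saved to a CSV file:

val i = Val[Double]
val j = Val[Double]
val values = Val[Array[Double]]
val o = Val[Double]

// Hypothetical toy model: computes a single output from the sampled inputs
val myModel =
  ScalaTask("val o = math.sin(i) + math.cos(j) + values.sum") set (
    inputs += (i, j, values),
    outputs += o
  )

val my_LHS_sampling =
  LHS(
    100, // Number of points of the LHS
    i in (0.0, 10.0),
    j in (0.0, 5.0),
    values in Vector((0.0, 1.0), (0.0, 10.0), (5.0, 9.0))
  )

val myExploration =
  DirectSampling(
    evaluation = myModel,
    sampling = my_LHS_sampling
  ) hook (workDirectory / "lhs_results.csv")

myExploration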