Explore with Other DoEs

Design of Experiments (DoE) is the art of setting up an experiment. In a model simulation context, it boils down to declaring the inputs under study (most of the time, they are parameters) and the values they will take, for a batch of several simulations, with the idea of revealing a property of the model (e.g. sensitivity). Even though several state-of-the-art DoE methods are implemented in OpenMOLE, we recommend focusing on the methods developed specifically for OpenMOLE: PSE, Calibration and Profiles, which were designed to overcome the drawbacks of the classical methods.

Your model inputs can be sampled in the traditional way, using grid (or regular) sampling, or by sampling uniformly inside their domain.
For higher-dimensional input spaces, specific statistical techniques such as Latin Hypercube Sampling and the Sobol Sequence are available.
If you want to use a design of experiments of your own, you may also provide a CSV file with your samples to OpenMOLE.
By defining your own exploration task on several types of inputs, you will be able to highlight some of your model's inner properties, like those revealed by sensitivity analysis, as shown below on a real-world example.

Grid Sampling, Uniform Distribution



For a reasonable number of dimensions and discretisation quanta (steps), complete sampling (or grid sampling) consists in producing every combination of the inputs' possible values, given their bounds and discretisation quanta.
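To make the combinatorics concrete, here is a minimal Python sketch of the idea (not OpenMOLE code): grid sampling is simply the cartesian product of each input's discretised domain, mirroring the ExplorationTask shown below.

```python
from itertools import product

def grid_sampling(domains):
    """Complete (grid) sampling: every combination of the inputs' discretised values."""
    return list(product(*domains))

# Two inputs, mirroring the OpenMOLE example:
# input_i in 0 to 10 by 2, input_j in 0.0 to 5.0 by 0.5
input_i = range(0, 11, 2)                # 6 values: 0, 2, ..., 10
input_j = [k * 0.5 for k in range(11)]   # 11 values: 0.0, 0.5, ..., 5.0
samples = grid_sampling([input_i, input_j])
# 6 * 11 = 66 combinations; the product grows exponentially with the
# number of inputs, which is why grid sampling breaks down in high dimension.
```

Note how the sample count is the product of the per-input value counts, which is the memory limitation discussed below.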

[Figure: method-score diagram with axes Input Exploration, Output Exploration, Sensitivity and Optimisation]

Method scores:
Regular (or uniform) sampling is quite good for a first exploration of the input space, when you don't know anything about its structure yet. Since it samples evenly from the input space, the values collected from the model executions reveal the outputs obtained for evenly spaced inputs. It is not perfect, but it still gives some insight into model sensitivity (as input values vary within their domain), and if the outputs are fitness values, it may provide a little optimisation information (i.e. the zone in which the fitness could be minimised).
The sampling does not reveal anything about the structure of the output space, as there is no reason that evenly spaced inputs lead to evenly spaced outputs. Basic sampling is also hampered by input space dimensionality: high-dimensional spaces need a lot of samples to be covered, as well as a lot of memory to store them.


Complete sampling is declared as an ExplorationTask, where the bounds and discretisation quantum of each varying input are declared using the following syntax:


val input_i = Val[Int]
val input_j = Val[Double]

val exploration =
  ExplorationTask (
    (input_i in (0 to 10 by 2)) x
    (input_j in (0.0 to 5.0 by 0.5))
  )
In this example ExplorationTask, two inputs vary at the same time, combined through the cartesian product; see Exploration of several inputs below for more details on input combinations.


Sampling can also be performed via a UniformDistribution(maximum), which generates values uniformly distributed between zero and the provided maximum. Custom domains can be defined using transformations, as in the example below, which generates values between -10 and +10.
val my_input = Val[Double]
val exploration = ExplorationTask(
    (my_input in (UniformDistribution[Double](max = 20) take 100)).map(x => x - 10)
)
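The shift performed by the map above is just an affine transformation of uniform draws. A minimal Python sketch of the same idea (not OpenMOLE code), assuming a fixed seed for reproducibility:

```python
import random

def uniform_shifted(n, maximum, shift, seed=42):
    """Draw n values uniformly in [0, maximum] and shift them,
    here mapping [0, 20] onto [-10, 10]."""
    rng = random.Random(seed)
    return [rng.uniform(0, maximum) + shift for _ in range(n)]

values = uniform_shifted(100, maximum=20, shift=-10)
```

Any custom domain [a, b] can be obtained this way by scaling and shifting a base uniform distribution.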

CSV Sampling


You can inject your own sampling in OpenMOLE through a CSV file. Considering a CSV file like:
colD,   colFileName,    i
0.7,    fic1,           8
0.9,    fic2,           19
0.8,    fic2,           19
The corresponding CSVSampling is:
val i = Val[Int]
val d = Val[Double]
val f = Val[File]

// Only comma-separated files with a header are supported for now
val s = CSVSampling("/path/to/a/file.csv") set (
  columns += i,
  columns += ("colD", d),
  fileColumns += ("colFileName", "/path/of/the/base/dir/", f),
  // comma ',' is the default separator, but you can specify a different one using:
  separator := ','
)

val exploration = ExplorationTask(s)

In this example, the column named i in the CSV file is mapped to the OpenMOLE variable i. The column named colD is mapped to the variable d. The content of the column named colFileName is appended to the base directory "/path/of/the/base/dir/" and used as a file in OpenMOLE. As a sampling, the CSVSampling can be injected directly into an ExplorationTask. It will generate a different task for each entry in the file.
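Conceptually, each CSV row becomes one parameter combination. A minimal Python sketch of that mapping (not OpenMOLE code; the CSV content and base directory below are placeholders mirroring the example):

```python
import csv
import io
import os

# Inline CSV content mirroring the example file above
csv_text = """colD,colFileName,i
0.7,fic1,8
0.9,fic2,19
0.8,fic2,19
"""

base_dir = "/path/of/the/base/dir"  # placeholder base directory

rows = []
for record in csv.DictReader(io.StringIO(csv_text)):
    rows.append({
        "i": int(record["i"]),                                # column i  -> Int
        "d": float(record["colD"]),                           # column colD -> Double
        "f": os.path.join(base_dir, record["colFileName"]),   # file column -> path
    })
# One task would be generated per row: here, 3 tasks.
```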

Latin Hypercube, Sobol Sequence


High-dimensional spaces must be handled via specific methods from the literature, because a cartesian product would otherwise consume too much memory. OpenMOLE includes two of these methods, the Sobol Sequence and Latin Hypercube Sampling, defined as specifications of the ExplorationTask:


[Figure: method-score diagram with axes Input Exploration, Output Exploration, Sensitivity and Optimisation]

Method scores:
These two methods perform well in terms of input space exploration (which is to be expected, as they were built for that purpose); they are superior to uniform or grid sampling, but share the same intrinsic limitations. There is no special way of handling the stochasticity of the model, beyond standard replications.
These methods are not expensive per se; the cost depends on the magnitude of the input space you want to cover.


Latin Hypercube Sampling


val i = Val[Double]
val j = Val[Double]

val my_LHS_sampling =
  ExplorationTask (
    LHS(
      100, // Number of points of the LHS
      i in Range(0.0, 10.0),
      j in Range(0.0, 5.0)
    )
  )
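To illustrate what LHS does (this is the standard textbook construction, not OpenMOLE's internal implementation), here is a minimal Python sketch: each dimension is split into as many equal strata as there are points, one point is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import random

def lhs(n, bounds, seed=0):
    """Latin Hypercube Sampling: per dimension, one sample in each of n
    equal-width strata, with strata shuffled independently."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)
        width = (hi - lo) / n
        # One uniform draw inside each (shuffled) stratum
        dims.append([lo + (s + rng.random()) * width for s in strata])
    return list(zip(*dims))

# Mirrors the OpenMOLE example: 100 points, i in [0, 10), j in [0, 5)
points = lhs(100, [(0.0, 10.0), (0.0, 5.0)])
```

The per-dimension stratification is what guarantees good marginal coverage with far fewer points than a full grid.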

Sobol Sequence


val i = Val[Double]
val j = Val[Double]

val my_sobol_sampling =
  ExplorationTask (
    SobolSampling(
      100, // Number of points
      i in Range(0.0, 10.0),
      j in Range(0.0, 5.0)
    )
  )
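A full Sobol generator relies on precomputed direction numbers and is beyond a short sketch, but its first dimension reduces to the base-2 radical-inverse (van der Corput) sequence. A minimal Python sketch of that building block, rescaled to a parameter range (not OpenMOLE code):

```python
def van_der_corput(n, base=2):
    """Radical-inverse sequence in [0, 1); the base-2 case is the first
    dimension of the Sobol sequence."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, rem = divmod(k, base)  # peel off digits of i in the given base
            x += rem / denom          # mirror them after the radix point
        points.append(x)
    return points

# Rescale to a parameter range, e.g. i in [0, 10)
seq = [10.0 * u for u in van_der_corput(8)]
```

Each new point lands in the largest remaining gap, which is why such low-discrepancy sequences cover the space more evenly than random draws.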

Exploration of several inputs


Exploration can be performed on several input domains, using the cartesian product operator x.

The basic syntax to explore two inputs (i.e. every combination of their values) is:
val i = Val[Int]
val j = Val[Double]

val exploration =
  ExplorationTask (
    (i in (0 to 10 by 2)) x
    (j in (0.0 to 5.0 by 0.5))
  )

Different Types of inputs


The cartesian product operator can handle different data types:
val i = Val[Int]
val j = Val[Double]
val k = Val[String]
val l = Val[Long]
val m = Val[File]

val exploration =
  ExplorationTask (
    (i in (0 to 10 by 2)) x
    (j in (0.0 to 5.0 by 0.5)) x
    (k in List("Leonardo", "Donatello", "Raphaël", "Michelangelo")) x
    (l in (UniformDistribution[Long]() take 10)) x
    (m in (workDirectory / "dir").files().filter(f => f.getName.startsWith("exp") && f.getName.endsWith(".csv")))
  )
 
This task performs every combination of the 5 inputs i, j, k, l and m. It can handle several types of inputs: Integer (i), Double (j), String (k), Long (l), File (m).
UniformDistribution[T]() take 10 is a uniform sampling of 10 numbers of the Long type, taken in the [Long.MIN_VALUE; Long.MAX_VALUE] domain of the native Long type.
Files are explored as items of a list. The items are gathered by the files() function applied to the dir directory, optionally filtered with any String => Boolean function such as contains(), startsWith() or endsWith() (see the Java String class documentation for more details).
If your input is one file among many, or one line of a CSV file, use the CSVSampling or FileSampling tasks.

Sensitivity Analysis


Typical sensitivity analysis (in a simulation experiment context) is the study of how the variation of an input affects the output(s) of a model.

[Figure: a model run taking inputs i, j, k and producing an output o(t) over time t]


Prerequisites

An embedded model in OpenMOLE (see Step 1: Model)

Variation of one input

The simplest case to consider is observing the effect of a single input's variation on a single output. This is achieved by using an exploration task, which will generate the sequence of values of an input, according to its boundary values and a discretisation step.
val my_input = Val[Double]

val exploration =
  ExplorationTask(
    (my_input in (0.0 to 10.0 by 0.5))
  )


Real-World Example


The Fire.nlogo model is a simple, one-parameter simulation model of fire propagation. It features a threshold value in its unique parameter's domain, below which the fire fails to burn the majority of the forest, and beyond which it propagates and burns most of it. We will perform a sensitivity analysis to make this change of regime appear. The Fire model integration has been covered in the NetLogo page of the Model section, so we take it from there. The former script already performed a sensitivity analysis, by varying density from 20 to 80 in steps of 10, with 10 replications for each value (the seed is sampled 10 times).
In our case, the quantum of 10 percent is quite coarse, so we refine it to 1 percent:
val exploration =
  ExplorationTask(
    (density in (20.0 to 80.0 by 1.0)) x
    (seed in (UniformDistribution[Int]() take 10))
  )
  



This is the resulting scatterplot of the number of burned trees as density varies from 20% to 80% in 1% steps.

[Figure: scatterplot of burned trees according to initial density]

The change of regime clearly appears between 50% and 75% density, so we take a closer look at this domain: we change the exploration task to vary density from 50% to 75% in steps of 0.1%, still with 10 replications:
val exploration =
  ExplorationTask(
    (density in (50.0 to 75.0 by 0.1)) x
    (seed in (UniformDistribution[Int]() take 10))
  )
  



This gives us the following scatterplot, to which we added an "estimator" (or predictor) curve (geom_smooth() from the ggplot2 R library, to be precise) that acts as a floating estimator over the collection of points, giving the statistical tendency of the Y-axis values of the points along the X-axis.
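geom_smooth() fits a local regression; the underlying idea of a "floating estimator" can be sketched much more crudely with a moving average over the points sorted by their X value. A minimal Python illustration of that idea (an assumption-laden stand-in, not what ggplot2 actually computes):

```python
def moving_average_smooth(xs, ys, window=5):
    """Crude stand-in for a smoother: average y over a sliding window
    of the points sorted by x."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    sx = [xs[i] for i in order]
    sy = [ys[i] for i in order]
    half = window // 2
    smooth = []
    for i in range(len(sy)):
        lo, hi = max(0, i - half), min(len(sy), i + half + 1)
        smooth.append(sum(sy[lo:hi]) / (hi - lo))  # local mean of y
    return sx, smooth
```

A local-regression smoother such as geom_smooth() refines this by fitting a weighted polynomial in each window instead of a plain mean.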

[Figure: scatterplot of burned trees with smoothed estimator curve]