Preliminary remarks 🔗
GAMA is a modelling and simulation development environment for building spatially explicit agent-based simulations. OpenMOLE supports GAMA models natively through the GAMATask.
The GAMATask uses the Singularity container system. You should install Singularity on your system, otherwise you won't be able to use it. The GAMATask supports files and directories, both in and out. Get some help on how to handle them by reading this page.
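As a sketch of what file handling might look like, the snippet below passes a file into the task and retrieves one out of it. The file names are placeholders and the exact inputFiles/outputFiles syntax should be checked against the dedicated page:

```
// Hypothetical sketch: passing a parameter file in and retrieving a result
// file out of the task. File names and the inputFiles/outputFiles syntax
// are assumptions to be checked against the file management documentation.
val parameterFile = Val[File]
val resultFile = Val[File]

val gamaWithFiles =
  GAMATask(project = workDirectory / "predator", gaml = "predatorPrey.gaml", finalStep = 100) set (
    inputFiles += (parameterFile, "parameters.csv"),
    outputFiles += ("results.csv", resultFile)
  )
```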
The GAMATask 🔗
GAMA by example 🔗
You can provide your .gaml file to the GAMATask to run your model and explore it with OpenMOLE.
The example below illustrates an exploration of the predator-prey model of the GAMA model library using a direct sampling:
// Declare the variables
val numberOfPreys = Val[Double]
val nbPreysInit = Val[Int]
val mySeed = Val[Long]

// Gama task
// The first argument is the project directory
// The second argument is the relative path of the gaml file in the project directory
// The third argument is the number of steps
val gama =
  GAMATask(project = workDirectory / "predator", gaml = "predatorPrey.gaml", finalStep = 100, seed = mySeed) set (
    inputs += (nbPreysInit mapped "nb_preys_init"),
    outputs += (numberOfPreys mapped "nb_preys")
  )

// Explore and replicate the model
DirectSampling(
  evaluation =
    Replication(
      evaluation = gama,
      seed = mySeed,
      sample = 10,
      aggregation = Seq(numberOfPreys evaluate average)) hook (workDirectory / "result"),
  sampling = nbPreysInit in (0 to 200 by 50)
) hook display
Task arguments 🔗
The GAMA task uses the following arguments:
- project File, the location of your GAMA project directory, mandatory, for instance project = workDirectory / "gamaproject"
- gaml String, the relative path of your .gaml file in the project directory, mandatory, for instance gaml = "model/model.gaml"
- finalStep Int, the last step of your simulation, it must be set if no stopping condition is set
- stopCondition String, a stopping condition for your simulation in GAMA, for instance (cycle=10 or nb_preys<10), it must be set if no finalStep is set; if both are set, only the stopping condition is taken into account
- seed Long, the OpenMOLE variable used to set the GAMA random number generator seed, optional, the seed is randomly drawn if not set
- parentExperiment String, the name of a parent experiment for the generated OpenMOLE experiment
- version String, optional, the version of GAMA to run
- containerImage, the label of a container image or a container file containing GAMA headless, optional, the default value is "gamaplatform/gama:1.9.0"
- memory, the memory allocated to the GAMA headless, optional, for example memory = 3000 megabytes
- install, some commands to run on the host system to initialise the container, for instance Seq("apt update", "apt install mylib"), optional
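Putting several of these arguments together, a task using a stopping condition instead of finalStep might look like the following sketch. The project, model, and variable names are illustrative; the argument names are those listed above:

```
// Illustrative sketch combining several optional arguments. Project, model,
// and variable names are placeholders; argument names follow the list above.
val nbPreysInit = Val[Int]
val mySeed = Val[Long]

val gama =
  GAMATask(
    project = workDirectory / "predator",
    gaml = "predatorPrey.gaml",
    stopCondition = "cycle=10 or nb_preys<10",
    seed = mySeed,
    version = "1.9.0",
    memory = 3000 megabytes
  ) set (
    inputs += (nbPreysInit mapped "nb_preys_init")
  )
```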
Reproduce an error using GAMA headless 🔗
The integration of GAMA into OpenMOLE is achieved through a container. OpenMOLE downloads the container for the version you're interested in and uses it to run your simulation. Sometimes, issues can arise in the communication between these two tools. To get a clearer understanding, it's important to determine which of the two is causing the problem. Here, using the predator-prey model as a basis, we suggest testing your model within the virtual machine that will be used by OpenMOLE. This allows you to perform diagnostics in case of a failure. OpenMOLE communicates with GAMA by defining an experiment and then running the batch mode of 'gama-headless.sh'. Therefore, to run GAMA in headless mode, you need to:
- write an experiment file that imports your model,
- launch the simulation using this experiment.
model openmoleexplorationmodel

import 'mymodel.gaml'

experiment _openMOLEExperiment_ {
    float seed <- 42;

    // Set some parameters
    parameter var: nb_preys_init <- 100;

    reflex stop_reflex when: cycle = 10 {
        // Some outputs to save in the json file
        map _outputs_ <- [
            "nb_preys"::nb_preys
        ];
        save to_json(_outputs_) to: "om_output.json" format: "txt";
        do die;
    }
}
Modify it to match your model and save it as 'experiment.gaml'. Make sure the import path is coherent with the location of your model relative to the location of the experiment file.
Using an existing GAMA installation 🔗
You can run:
./gama-headless.sh -batch _openMOLEExperiment_ experiment.gaml
Once the headless has run, you should find a file named om_output.json. Check that it exists, is well formed and contains the simulation results. Otherwise, check your model or experiment for errors. If you find a bug in GAMA, you can report it to the GAMA developers on GitHub.
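This check can be scripted. The shell sketch below (the function name is ours, and it assumes python3 is available for JSON validation) distinguishes a missing output file from a malformed one:

```shell
# Hypothetical helper: report whether the headless output file is missing,
# malformed, or valid JSON. Assumes python3 is installed on the system.
check_om_output() {
    file="${1:-om_output.json}"
    if [ ! -f "$file" ]; then
        echo "missing"
    elif python3 -m json.tool "$file" > /dev/null 2>&1; then
        echo "valid"
    else
        echo "malformed"
    fi
}
```

For instance, `check_om_output om_output.json` prints `valid` only when the file exists and parses as JSON.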
Using Docker 🔗
To reproduce a behaviour close to what OpenMOLE executes, you can run the GAMA Docker image. You should have Docker installed on your computer. Download the GAMA Docker image. All available containers can be found on the Docker Hub. Here, we will use the alpha version.
export GAMA_VERSION="alpha" # Replace by the version you want to test
docker pull gamaplatform/gama:$GAMA_VERSION
To get an interactive shell into the GAMA docker:
docker run -it -v "/tmp/gama model/":/work --entrypoint /bin/bash gamaplatform/gama:$GAMA_VERSION
-it stands for interactive terminal. -v "/tmp/gama model/":/work mounts a volume between your host system and the container: /tmp/gama model/ is a folder on your computer, and /work is the folder inside the container. This mount allows the container to access the model files.
Once inside the Docker container, you're in the /opt/gama-platform/headless directory, and the model is in the folder /work. You can then run GAMA commands to run your model.
./gama-headless.sh -batch _openMOLEExperiment_ /work/experiment.gaml
Once the headless has run, you should find a file named om_output.json. Check that it exists, is well formed and contains the simulation results. Otherwise, check your model for errors. If you find a bug in GAMA, you can report it to the GAMA developers on GitHub.