Running experiments

These instructions assume that you have a working dev container virtual machine set up in a GitHub Codespace or similar environment.

G-Cubed simulation experiments are run using Python scripts. Those scripts are contained in the model’s python folder at /user-data/gcubed-2R/python.

The anatomy of a G-Cubed simulation experiment

A typical G-Cubed simulation experiment involves the following steps.

Solve the model

Solving the model means deriving a rule for producing expectations-consistent projections of all endogenous variables in the system, given a set of exogenous variable projections and a set of starting values for state variables. This rule is known as the model solution.

Generate baseline projections

Apply the model solution to a baseline set of exogenous variable projections and a set of starting values for state variables to produce expectations-consistent projections of all endogenous variables in the system. These projections are known as the baseline projections. Model users have a great deal of control over the exogenous variable projections in the baseline, as described in the documentation of how baseline projections are formed.

Produce layers of simulation projections

One or more layers of simulation projections are generated by applying layers of new adjustments, or shocks, to the exogenous variables in the model.

Review deviations from baseline projections

Compare the baseline projections to a new set of projections (the simulation projections) that are produced by applying the model solution to the original state variable starting values and an adjusted or “shocked” set of exogenous variable projections. The difference between the baseline projections and the new projections is known as the deviation projections.

Experiments can be made considerably more complex than the outline above by adding additional layers of shocks to the exogenous variables. Each layer of shocks to exogenous variables is known as a simulation layer.

The process of obtaining a model solution is computationally intensive. Fortunately, both the model solution and the baseline projections can be reused across different experiments. This reuse is built into the recommended process for running experiments.
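The arithmetic relating baseline, simulation, and deviation projections can be sketched in plain Python. This is an illustration of the relationship only, not the G-Cubed API; the variable names and numbers below are made up:

```python
def deviations(baseline, simulation):
    """Year-by-year deviations of the simulation projections from the baseline."""
    return [round(sim - base, 3) for base, sim in zip(baseline, simulation)]

# Hypothetical projections for a single endogenous variable over four periods.
baseline_gdp = [100.0, 102.0, 104.0, 106.1]
shocked_gdp = [100.0, 101.0, 103.5, 106.0]

print(deviations(baseline_gdp, shocked_gdp))  # [0.0, -1.0, -0.5, -0.1]
```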

The python folder

The python folder contains the following files:

  • parameters_2R_178.py: a file that contains the model parameters calibration class. It is required when the model is run. It should not be altered.
  • model_constants_2R_178.py: a file that contains convenience variables and functions that assist with running the model. It can be edited to include relevant details of new simulation experiments as they are created.
  • run_fast_baseline.py: a Python script that produces and saves the model solution and the baseline projections for later re-use.
  • share_baseline_projections_with_experiments.py: a Python script that makes it easy to reuse the model solution and the baseline projections with the experiments.
  • Various other Python scripts (files with a .py extension), each responsible for running a specific simulation experiment (e.g. run_fast_experiment1.py).

Running the Python scripts in order

The recommended process for running experiments is as follows:

  1. Run the run_fast_baseline.py script to create the results folder (user_data/results/run_fast_baseline) and to produce the model solution and the baseline projections. These are saved to disk as files with a .pickle extension in the results folder. To modify the model (e.g. change parameters) or to modify the baseline projections, delete the .pickle files in the results folder and re-run the run_fast_baseline.py script.

  2. Run the share_baseline_projections_with_experiments.py script to create separate results directories for all of the experiments (all scripts with a run_ name prefix) and to create symbolic links to the model solution and the baseline projections in each of those directories. Note that when the model solution and baseline projections are regenerated, the symbolic links will automatically resolve to the new versions. This is convenient, but it can lead to unexpected changes in experiment results if you are not aware of it.

  3. Run the script for the experiment of interest.
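The symbolic-link caveat in step 2 can be demonstrated with the standard library alone. This self-contained sketch mimics what the sharing script sets up (the file names are made up for illustration): regenerating the baseline file silently changes what the experiment's link reads.

```python
import os
import tempfile

def linked_contents() -> str:
    with tempfile.TemporaryDirectory() as root:
        baseline = os.path.join(root, "baseline.pickle")
        link = os.path.join(root, "experiment", "baseline.pickle")
        os.makedirs(os.path.dirname(link))

        with open(baseline, "w") as f:
            f.write("baseline v1")
        os.symlink(baseline, link)  # what the sharing script does for each experiment

        # "Re-run run_fast_baseline.py": rewrite the baseline file in place.
        with open(baseline, "w") as f:
            f.write("baseline v2")

        # The experiment's link now resolves to the regenerated version.
        with open(link) as f:
            return f.read()

print(linked_contents())  # baseline v2
```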

Experiment Python scripts

Each experiment is a Python file with a run_ filename prefix, for example run_fast_experiment1.py. Each experiment is associated with an experiment design and a set of one or more simulation layers (sets of shocks to exogenous variables).

Running an experiment

There are two ways to run an experiment.

You can run an experiment by opening its Python file in the VS Code editor. A small right-pointing arrowhead appears in the top right corner of the editor tab; clicking it runs the experiment. The run log appears live in a terminal that opens at the bottom of the VS Code window, showing the details of the experiment as it runs.

Alternatively, you can run an experiment from the command line, e.g.:

/user_data/$ python3 run_fast_experiment1.py

Typically, experiments are run using a Runner object (there are a number of Runner classes). A Runner runs an entire experiment and saves the results to the results folder: starting from a model solution and a set of baseline projections, it sequentially applies the simulation layers to those baseline projections.

The projections associated with the application of each simulation layer are preserved as a Projections object. To access the full list of projections produced when running an experiment, access the Runner.all_projections property. Each Projections object provides access to the projections implied by the baseline and the cumulative impact of the simulation layers up to and including its own simulation layer.

In this way, the Runner object provides a way to analyse the marginal and cumulative impacts of each simulation layer. To assess the marginal impact of a simulation layer work with deviations of the simulation layer’s projections from the projections associated with the previous simulation layer. To assess the cumulative impact of all simulation layers up to and including the simulation layer of interest, work with deviations of that simulation layer’s projections from the baseline projections.
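The marginal-versus-cumulative distinction can be illustrated in plain Python. This is not the G-Cubed Runner API; each list below merely mimics one element of Runner.all_projections (the projections after applying that layer), with made-up numbers:

```python
layer_projections = [
    [100.0, 102.0, 104.0],  # baseline (no shocks)
    [100.0, 101.0, 103.0],  # after simulation layer 1
    [100.0, 100.5, 102.0],  # after simulation layers 1 + 2
]

def cumulative_deviations(layer_index):
    """Deviation of a layer's projections from the baseline projections."""
    base, layer = layer_projections[0], layer_projections[layer_index]
    return [round(x - b, 3) for b, x in zip(base, layer)]

def marginal_deviations(layer_index):
    """Deviation of a layer's projections from the previous layer's projections."""
    prev, layer = layer_projections[layer_index - 1], layer_projections[layer_index]
    return [round(x - p, 3) for p, x in zip(prev, layer)]

print(cumulative_deviations(2))  # [0.0, -1.5, -2.0]
print(marginal_deviations(2))    # [0.0, -0.5, -1.0]
```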

Experiment results

Experiment results are saved to the results folder in /workspace/user_data/results/2R/178/<experiment script name>. For example, after running run_fast_experiment1.py, the results will be available in /workspace/user_data/results/2R/178/run_fast_experiment1.

The results folder will contain the run.log file and any CSV projections files (in levels or deviations) that were produced by the experiment.
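The CSV projections files can be read with standard tooling. The column layout in this sketch (a variable name column followed by one column per year) is an assumption made for illustration, not the documented G-Cubed format:

```python
import csv
import io

# Stand-in for one of the experiment's CSV files; the layout is assumed.
sample = io.StringIO(
    "variable,2025,2026,2027\n"
    "GDP,0.0,-1.5,-2.0\n"
    "INFLATION,0.1,0.3,0.2\n"
)

# Map each variable to {year: value} for easy lookup.
rows = {row["variable"]: {y: float(v) for y, v in row.items() if y != "variable"}
        for row in csv.DictReader(sample)}

print(rows["GDP"]["2026"])  # -1.5
```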

The projections produced by an experiment can be viewed in the online charting application.