==========
Quickstart
==========

To start calibrating an epidemic model using OptiLog_ and PyDGGA_, follow the steps below:

-------------
Prerequisites
-------------

Ensure you have Docker_ installed on the machine where the calibration process will run.
To install Docker_, please follow the instructions provided in the `official guides `_.

Obtain a PyDGGA_ license (`.pydgga.lic` file). To request a license, please write to carlos.ansotegui@gmail.com

---------------------
Download the template
---------------------

First, obtain the template from the **Logic and Optimization Group - Software** `page `_, or follow this `link `_.

Inside the compressed file you will find a folder named **EpidemicGga**, which is the template ready to be filled in, an **Examples** folder, which contains examples of the template filled in with some epidemic models, and the **Manual.pdf** of the project.

The description of the template can be found at :ref:`Template Description `.

----------------------
Implementing the model
----------------------

Inside the template, there is a project folder with a **model.py** file. This file will contain the implementation of the epidemic model.

First, implement the :code:`entrypoint` function. This function is responsible for loading the data of the evolution of the infection and calling the model. It has two parameters: :code:`data` and :code:`seed`. :code:`data` is the filename of the dataset used to compute the fit error of the model, and :code:`seed` is an integer that can be used to ensure reproducibility in models with stochastic components. For example, a simple entrypoint could be:

.. code-block:: python

    import pandas

    def entrypoint(data, seed):
        data = pandas.read_csv(data)
        cost = model(data)
        print(f"Result: {cost}")

Then, implement the :code:`model` function. It has two types of parameters:

Configurable parameters
    These parameters represent the model variables and will be calibrated. They do not need to be set by the entrypoint function.

Non-configurable parameters
    Other parameters, such as the dataset, the seed, the number of days to consider... These must be set by the entrypoint function.

The configurable parameters must be annotated as explained in the `OptiLog official documentation `_. For example, a model which receives the data and has two biological variables :math:`\beta` and :math:`\delta` can be defined as:

.. code-block:: python

    @ac
    def model(data, beta: Real(0.0, 1.0) = 0.5, delta: Real(0.0, 1.0) = 0.5):
        ...
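Putting the two pieces together, the sketch below shows one way **model.py** could look. It is only an illustration, not part of the template: the toy growth rule, the :code:`cases` column name and the mean-squared-error cost are assumptions made for this example, and the OptiLog import path may differ depending on the installed version.

.. code-block:: python

    import pandas

    # Assumed import path; other OptiLog releases may expose ac/Real
    # from a different module.
    from optilog.tuning import ac, Real


    @ac
    def model(data, beta: Real(0.0, 1.0) = 0.5, delta: Real(0.0, 1.0) = 0.5):
        # Toy dynamics for illustration only: each day's cases grow by beta
        # and shrink by delta with respect to the previous day.
        predicted = [data["cases"].iloc[0]]
        for _ in range(len(data) - 1):
            predicted.append(predicted[-1] * (1 + beta - delta))
        # Fit error: mean squared error between predictions and observations.
        return ((data["cases"] - predicted) ** 2).mean()


    def entrypoint(data, seed):
        # seed is unused here because this toy model is deterministic.
        data = pandas.read_csv(data)
        cost = model(data)
        print(f"Result: {cost}")

During calibration, PyDGGA will explore values of :math:`\beta` and :math:`\delta` within the declared ranges instead of using the defaults.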
-------------------------
Preparing the environment
-------------------------

Usually, models will require external Python packages to be available at runtime. Common packages are Pandas_ or Numpy_. It is good practice to list those packages in a **requirements.txt** file, so all the packages can be installed with:

.. code-block:: bash

    pip install -r requirements.txt

.. warning::

    It is mandatory to define the packages if you plan to use Docker to calibrate the model, as by default the Docker image used will only have OptiLog installed.

In some cases, though, installing `Python packages`_ will not be enough, as they might have other system dependencies. To install external dependencies when using Docker_, you can use Apt_ [#]_.

For example, to install the fictional packages *myPackage1* and *myPackage2*, you can modify the provided **Dockerfile**. First, uncomment the following lines and add the package(s) to :code:`MODEL_DEPENDENCIES`, as:

.. code-block:: docker

    [...]

    ENV DEBIAN_FRONTEND noninteractive
    ENV MODEL_DEPENDENCIES "myPackage1 myPackage2"

    RUN apt-get update \
        && apt-get -y install \
            ${MODEL_DEPENDENCIES} \
        && apt-get clean

    [...]

----------------------------------------
Run the calibration process using Docker
----------------------------------------

.. warning::

    In order to run the calibration process, you need a PyDGGA license.
    To calibrate the model locally, copy the license file to your home directory with the name :code:`.pydgga.lic`.
    To use Docker, copy the license file to the template directory with the name :code:`.pydgga.lic` (so it will automatically be copied inside the image).

To calibrate the model using Docker_, start by creating a Docker image with your model and your data:

.. code-block:: bash

    # Inside the template folder
    docker build -t ${name} .

where :code:`${name}` is the name of the image and, optionally, a tag. More information about image names can be found in the `Docker official documentation `_.

To simplify the process, the provided **Makefile** also has a target to build the image. In the template directory, simply run :code:`make build` after changing the image name in the **Makefile**.

.. warning::

    Change the :code:`IMAGE_NAME=...` line in the **Makefile** before running :code:`make build`.

Once the image is created, you can run it with:

.. code-block:: bash

    docker run ${image_name}

To run the Docker image in the cloud, please see :ref:`Running in the cloud `.

.. rubric:: Footnotes

.. [#] The provided Docker image is based on Debian.