Simulation Tasking
Run simulations with Hatchet and Docker#
This guide walks you through running globi simulations end‑to‑end using Hatchet and Docker.
It assumes you have already completed the setup guide, including:
- cloning the `globi` repo
- installing dependencies with `uv sync --all-extras --all-groups`
- installing Docker, Git, Python 3.12+, and `make`
The steps below cover:
- starting the Hatchet server and simulation engine
- configuring environment files and tokens
- submitting a simulation manifest
- monitoring runs in the Hatchet UI
- fetching and storing results
- safely shutting everything down
Before you start#
- Docker running: make sure Docker Desktop (or the Docker daemon) is running.
- terminal location: run commands from the repository root (the folder containing `Makefile` and `pyproject.toml`).
- network access: the first run may download container images from remote registries and can take several minutes.
Note
The commands in this guide are the same for macOS, Linux, and Windows (using a Unix-like shell such as Git Bash or WSL).
Step 1: Start the Hatchet server#
The Hatchet server provides the UI and orchestration backend for managing workflows.
Note
On your first run in this repository, you do not need to run make hatchet-lite separately. The make hatchet-token command in Step 2 will automatically start the Hatchet server if it is not already running.
If you already have a Hatchet token configured and just need to ensure the server is running, you can start it manually:
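This is the `make hatchet-lite` target listed in the quick reference at the end of this guide:

```shell
make hatchet-lite
```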
This:
- builds and/or pulls the `hatchet-lite` Docker image
- starts the Hatchet server container in the background
- exposes the Hatchet UI on http://localhost:8888
Note
The first run may take several minutes while Docker downloads and builds images. Later runs are much faster.
You can verify the container is up by running:
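For example, with plain Docker:

```shell
docker ps
```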
Look for a hatchet-lite container with a running status.
Step 2: Create and configure Hatchet environment files#
Hatchet uses a client token stored in environment files that are loaded by the `make cli` (dockerized) or `make cli-native` (non-dockerized) targets.
- Generate a Hatchet client token:
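This uses the `make hatchet-token` target mentioned in the note above:

```shell
make hatchet-token
```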
This will:

- ensure `hatchet-lite` is running
- execute the Hatchet admin command inside the container
- print a `HATCHET_CLIENT_TOKEN` value in your terminal

- Copy the token into your Hatchet env files.
In your terminal output, locate a line similar to:
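The value itself is a long opaque string; the placeholder below stands in for your real token:

```
HATCHET_CLIENT_TOKEN=<long-token-string>
```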
Open your Hatchet environment file(s), for example `.env.local.host.hatchet`, and add or update the line:
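Using a placeholder for the real token value:

```
HATCHET_CLIENT_TOKEN=<paste-your-token-here>
```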
- Save the files.
Warning
Treat your `HATCHET_CLIENT_TOKEN` like a password. Do not commit it to Git, and do not share it publicly.
Tip
If you see example files such as `.env.local.host.hatchet.example`, copy them once and then edit the resulting `.env` files:
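A sketch, assuming the example file name mentioned above:

```shell
cp .env.local.host.hatchet.example .env.local.host.hatchet
```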
then replace the placeholder token with the real one.
Step 3: Start the simulation engine and workers#
Now start the full engine stack, which includes:
- Hatchet server
- Simulation workers
- Fanout workers
- Any required supporting services
Run:
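This is the `make engine` target (also referenced later for restarting and in the quick reference):

```shell
make engine
```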
This command:
- composes `docker-compose.yml`, `docker-compose.hatchet.yml`, and `docker-compose.aws.yml`
- builds images if needed
- starts all services in the background with `-d`
You can check container status with:
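One way to do this, assuming the three compose files listed above:

```shell
docker compose \
  -f docker-compose.yml \
  -f docker-compose.hatchet.yml \
  -f docker-compose.aws.yml \
  ps
```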
You should see containers for Hatchet and the simulation services with a running status.
Note
On macOS, you may occasionally see an error like:

```
target simulations: failed to solve: image ".../hatchet/globi:latest": already exists
make: *** [engine] Error 1
```

This is usually transient. Re-run `make engine` and the issue should resolve.
Step 4: Access the Hatchet UI#
Open your browser and go to http://localhost:8888.
On the first run, Hatchet may prompt you to create or confirm an admin account in the terminal where the container is running.
For the local `hatchet-lite` instance, you can use the admin account you created or confirmed in that terminal.
In the Hatchet UI:
- navigate to workers
- verify that the expected workers are running and healthy
If you do not see workers, refer to the troubleshooting section below.
Step 5: Run a test simulation#
Now you can submit a simulation manifest via the `make cli` (dockerized) or `make cli-native` (non-dockerized) targets, which wrap the `globi` CLI with the correct environment files.
Warning
All input files referenced in your manifest (including the manifest itself, artifacts, component maps, semantic fields, GIS files, etc.) must be located in the inputs/ folder or subdirectories within it. Ensure all file paths in your manifest and artifacts configuration are relative to the inputs/ directory.
- Confirm the engine is running:
  - ensure `make engine` has completed without errors
  - verify containers are running with `docker compose ... ps`

- Prepare your manifest.

  Your manifest file should be in the `inputs/` directory, for example `inputs/manifest.yml`. All files referenced by the manifest (artifacts, component maps, semantic fields, GIS data, etc.) should also be in `inputs/` or subdirectories.
- Submit the manifest:
```shell
# dockerized
make cli submit manifest -- --path inputs/manifest.yml --grid-run --max-tests 100

# non-dockerized
make cli-native submit manifest -- --path inputs/manifest.yml --grid-run --max-tests 100
```

Warning

Critical: you must include the two dashes `--` after `manifest` and before the `--path` option. This separator is required to pass arguments correctly to the underlying CLI command. If you forget it, the command will fail with an error.

The command structure is:
```shell
# dockerized
make cli submit manifest -- --path {PATH_TO_MANIFEST} [OPTIONAL_FLAGS]

# non-dockerized
make cli-native submit manifest -- --path {PATH_TO_MANIFEST} [OPTIONAL_FLAGS]
```

where:
- `{PATH_TO_MANIFEST}` is your manifest file path (for example `inputs/manifest.yml`)
- `--grid-run` enables grid-style execution over the manifest configuration
Optional flags:
- `--max-tests {NUMBER}`: override the maximum number of tests in a grid run (default: 1000). Example: `--max-tests 100`
- `--scenario {SCENARIO_NAME}`: override the scenario listed in the manifest file with the provided scenario
- `--skip-model-constructability-check`: skip the model constructability check (flag, no value)
- `--epwzip-file {PATH}`: override the EPWZip file listed in the manifest file with the provided EPWZip file
Example with multiple optional flags:
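A sketch combining several of the optional flags above; the scenario name `Baseline` is illustrative:

```shell
# dockerized
make cli submit manifest -- --path inputs/manifest.yml --grid-run \
  --max-tests 100 --scenario Baseline --skip-model-constructability-check
```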
- Monitor progress in the Hatchet UI:
  - go to http://localhost:8888
  - navigate to Workflows or Runs
  - locate the workflow corresponding to your manifest submission
  - watch the status transition from `pending` → `running` → `completed` (or `failed` if there is an error)
You can click into the workflow to view task‑level logs and any errors.
- Note the `run_name` from the output:

  When the simulation completes, the CLI prints a summary with a `run_name` (for example `TestRegion/dryrun/Baseline`). Save this `run_name`; you will need it to fetch results in the next step.

Note

Results are stored in cloud storage (S3) and are not automatically downloaded to your local machine. See Step 6 for instructions on accessing results.
Step 6: Access simulation results#
When a simulation completes, the CLI prints a summary similar to:
```yaml
versioned_experiment:
  base_experiment:
    experiment: scythe_experiment_simulate_globi_building
    run_name: TestRegion/dryrun/Baseline
    storage_settings:
      BUCKET: test-bucket
      BUCKET_PREFIX: globi
  version:
    major: 1
    minor: 0
    patch: 0
  timestamp: '2026-01-27T22:35:23.417925'
```
Important
Results are stored in a bucket, not automatically saved to your local machine. You must use the `get experiment` command to download results to your local filesystem.
- `run_name` identifies the specific run (for example `TestRegion/dryrun/Baseline`)
- `version` is a semantic version (major.minor.patch) of the experiment configuration
- `storage_settings` shows the S3 bucket and prefix where results are stored
Where results are stored#
After submission, simulation results are:
- stored in cloud storage (S3 bucket configured in your environment)
- organized by run_name and version in the cloud
- not automatically downloaded to your local machine
To access results locally, you must fetch them using the get experiment command described below.
Fetch the latest version of a run#
Copy the run_name from the terminal output and run:
```shell
# dockerized
make cli get experiment -- --run-name {YOUR_RUN_NAME_HERE}

# non-dockerized
make cli-native get experiment -- --run-name {YOUR_RUN_NAME_HERE}
```
For example, if your run_name is TestRegion/dryrun/Baseline:
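Using the dockerized target:

```shell
make cli get experiment -- --run-name TestRegion/dryrun/Baseline
```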
This command:
- downloads the latest version of the experiment from cloud storage
- saves results to `outputs/{run_name}/{version}/Results.pq` by default
- prints the exact location where files were saved
- automatically generates CSV and Excel files for the `Results` dataframe
Example output structure:
```
outputs/
└── TestRegion/
    └── dryrun/
        └── Baseline/
            └── 1.0.0/
                ├── Results.pq    # parquet file
                ├── Results.csv   # csv export
                └── Results.xlsx  # excel workbook with multiple sheets
```
Fetch a specific version and output directory#
If you have multiple versions of the same run, or you want to control exactly where results are written, include `--version` and `--output_dir`:
```shell
# dockerized
make cli get experiment -- \
  --run-name {YOUR_RUN_NAME_HERE} \
  --version {VERSION} \
  --output_dir {YOUR_CHOSEN_OUTPUT_DIR}

# non-dockerized
make cli-native get experiment -- \
  --run-name {YOUR_RUN_NAME_HERE} \
  --version {VERSION} \
  --output_dir {YOUR_CHOSEN_OUTPUT_DIR}
```
where:

- `{VERSION}` is of the form `major.minor.patch` (for example `1.0.0`)
- `{YOUR_CHOSEN_OUTPUT_DIR}` is a local path where you want results saved
Additional options:

- `--dataframe-key {KEY}`: specify which dataframe to download (default: `Results`). Other options may include `HourlyData` if hourly data was configured
- `--include-csv`: include CSV export in addition to parquet (CSV is automatically included for the `Results` dataframe)
Example with all options:
```shell
# dockerized
make cli get experiment -- \
  --run-name TestRegion/dryrun/Baseline \
  --version 1.0.0 \
  --output_dir outputs/my_analysis \
  --include-csv

# non-dockerized
make cli-native get experiment -- \
  --run-name TestRegion/dryrun/Baseline \
  --version 1.0.0 \
  --output_dir outputs/my_analysis \
  --include-csv
```
Tip
Choose an output directory under a dedicated folder (for example `outputs/`) to keep simulation results organized by run and version.
Warning
Critical: you must include the two dashes `--` after `experiment` and before the `--run-name` option. This separator is required to pass arguments correctly to the underlying CLI command.
Step 7: Shut down Docker services#
When you are done running simulations, you can stop all related Docker containers with:
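This is the `make down` target from the quick reference:

```shell
make down
```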
This:
- stops and removes containers from `docker-compose.yml`, `docker-compose.hatchet.yml`, and `docker-compose.aws.yml`
- keeps Docker images on disk so future runs start faster
Run `make engine` again the next time you want to use the system.
Troubleshooting#
This section lists common issues and concrete steps to diagnose and fix them.
Docker and container issues#
- Docker daemon not running
  - ensure Docker Desktop (macOS/Windows) or the Docker service (Linux) is running
  - verify with:
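For example, `docker info` fails quickly when the daemon is unreachable:

```shell
docker info
```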
- containers not staying up
  - check logs for a specific service, for example:
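A sketch, assuming the compose files named in Step 3; replace `<service-name>` with the service you want to inspect:

```shell
docker compose \
  -f docker-compose.yml \
  -f docker-compose.hatchet.yml \
  -f docker-compose.aws.yml \
  logs <service-name>
```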
  - look for configuration or startup errors in the log output
- image already exists error when running `make engine`
  - if you see the `already exists` error shown in Step 3, simply re-run `make engine`
  - if the error persists, try stopping the stack with `make down` and then running `make engine` again
- port 8080 already in use
  - if `hatchet-lite` fails to start because port `8080` is in use:
    - close any other application using port `8080`
    - or stop the conflicting container/process
    - then re-run `make hatchet-lite` or `make engine`
Hatchet UI and API issues#
- cannot load http://localhost:8888
  - verify that `hatchet-lite` is running (for example with `docker ps`)
  - if the container is not running, start it with `make hatchet-lite`
  - if it still fails, inspect its logs (for example with `docker logs hatchet-lite`)
- workers not appearing in Hatchet UI
  - ensure the engine stack is running (`make engine`)
  - check for worker containers in `docker compose ... ps`
  - open Hatchet UI → Workers and verify that they show as healthy
  - if workers crash repeatedly, inspect their logs using `docker compose ... logs <service-name>`
Token and environment configuration issues#
- token errors or unauthorized requests
  - confirm `HATCHET_CLIENT_TOKEN` is set in your Hatchet env file(s), for example in `.env.local.host.hatchet`
  - ensure there are no extra quotes or spaces around the value
  - if you suspect the token is invalid or expired, regenerate it with `make hatchet-token`, then update the env files with the new token
- env file not being loaded
  - `make cli` and `make cli-native` load environment from:
    - `.env.$(AWS_ENV).aws` (default: `.env.local.host.aws`)
    - `.env.$(HATCHET_ENV).hatchet` (default: `.env.local.host.hatchet`)
    - `.env.scythe.fanouts`
    - `.env.scythe.storage`
  - verify these files exist and contain the expected variables
Simulation and worker issues#
- jobs stuck in `pending`
  - check that workers are running (Hatchet UI → Workers)
  - confirm worker containers are healthy with `docker compose ... ps`
  - inspect worker logs for errors (for example configuration or connectivity issues)
- workflow fails immediately after submission
  - open the workflow in the Hatchet UI and inspect task logs
  - common causes:
    - invalid manifest path (`--path` does not exist)
    - missing the `--` separator after `manifest` (must be: `make cli-native submit manifest -- --path ...`)
    - input files not in the `inputs/` folder
    - missing or incorrect environment variables
    - storage configuration issues (for example S3 bucket permissions)
Python and uv issues#
- `module not found` or missing dependency
  - re-sync dependencies with `uv sync --all-extras --all-groups`
  - or run the project install target defined in the `Makefile`
- Python version error
  - confirm that your Python version is 3.12 or higher (`python --version`)
  - if needed, install Python 3.12 with `uv` (see the setup guide)
Quick reference#
Essential commands#
```shell
# start hatchet server (ui and api) - only needed if you already have a token
make hatchet-lite

# generate hatchet token and print to terminal (starts hatchet-lite automatically on first run)
make hatchet-token

# start full engine stack (hatchet + workers + services)
make engine

# submit a simulation manifest (note the -- separator is required!)
# dockerized
make cli submit manifest -- --path inputs/manifest.yml --grid-run --max-tests 100
# non-dockerized
make cli-native submit manifest -- --path inputs/manifest.yml --grid-run --max-tests 100

# fetch results for a run
# dockerized
make cli get experiment -- --run-name {YOUR_RUN_NAME_HERE}
# non-dockerized
make cli-native get experiment -- --run-name {YOUR_RUN_NAME_HERE}

# stop and remove all related docker containers
make down

# open hatchet ui
open http://localhost:8888  # macos
# or manually paste http://localhost:8888 into your browser
```
Key file locations#
- environment config: `.env.*` files used by `make cli` and `make cli-native`
- input files: `inputs/` directory (all manifest and data files must be here)
- hatchet configuration: `hatchet.yaml`
- make targets: `Makefile`