This repository contains the configuration for the Biomass Dashboard API datasets, products, and country pilots.
For each of these entity types, the pattern is to create a config file (or files) for that entity and then add its `id` value to `config.yml`. Note that in other versions of the dashboard, the values in `config.yml` may be the prefix or entire name of the JSON files, but in this project they are always the `id` used within the JSON file.
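A minimal sketch of `config.yml` under this pattern (the `NCEO_Africa` id comes from the example dataset below; the other ids are hypothetical placeholders):

```yaml
# Each array lists entity ids (not file names)
DATASETS:
  - NCEO_Africa
PRODUCTS:
  - global          # the "global" product is described below
COUNTRY_PILOTS:
  - example_pilot   # hypothetical placeholder id
```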
The `datasets` directory contains a JSON file for each available dataset (see example: `datasets/NCEO_Africa.json`). Each dataset ID (not the name of the file) must be included in the `DATASETS` array in `config.yml`.
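For illustration, a minimal dataset file might look like the sketch below; the only field this README relies on is `id` (which must match the `DATASETS` entry), and the `name` field is a hypothetical example. See `datasets/NCEO_Africa.json` for a full definition.

```json
{
  "id": "NCEO_Africa",
  "name": "NCEO Africa Biomass"
}
```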
The `products` directory contains a directory for each product. Within each directory, there are two files:

- a JSON file containing the definition of the product
- `summary.html`, an HTML fragment that's used as the summary description in the dashboard

Each product ID (not the name of the file) must be included in the `PRODUCTS` array in `config.yml`.
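As a sketch, a hypothetical product directory could look like this (the JSON file name is an assumption; only the `id` inside it matters):

```
products/
└── global/
    ├── global.json    # product definition (file name assumed)
    └── summary.html   # HTML fragment shown in the dashboard
```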
The `global` product is used for the "All" or "Global" product in the dashboard.
The `country_pilots` directory contains a directory for each country pilot. Within each directory, there are two files:

- a JSON file containing the definition of the country pilot
- `summary.html`, an HTML fragment that's used as the summary description in the dashboard

Each country pilot ID (not the name of the file) must be included in the `COUNTRY_PILOTS` array in `config.yml`.
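Both products and country pilots include a `summary.html` file; a minimal hypothetical fragment might be:

```html
<!-- Hypothetical example; real files contain the entity's dashboard summary -->
<p>Short description shown in the dashboard summary panel.</p>
```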
By default, these dashboard API configuration files are deployed to S3 via a GitHub Action for the `main`, `staging`, and `production` branches.
To configure this:
- In the GitHub repository, add secrets (Settings -> Secrets) for accessing AWS (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`).
- Update `config.yml` with the appropriate `BUCKET` configuration.
- Push `config.yml` to GitHub and verify the workflow runs correctly. Note that only branches configured in `.github/workflows/update-datasets-and-sites.yml` will run the workflow (which generates the datasets/sites metadata files).
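For reference, the branch filter in `.github/workflows/update-datasets-and-sites.yml` would look something like this sketch (the actual workflow may differ):

```yaml
# Only pushes to these branches trigger the workflow
on:
  push:
    branches:
      - main
      - staging
      - production
```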
If you wish to deploy manually, e.g., to test using a custom stage named for you, follow the steps below. This will create and copy the configuration files to the S3 location indicated in `BUCKET` and print the final JSON description.
- Update `config.yml` with the appropriate `BUCKET` configuration.
- Export a shell variable for `STAGE`, e.g., `export STAGE=pvarner`.
- Run the generators:
```sh
export STAGE=local
python dataset_metadata_generator/src/main.py | jq .
python products_metadata_generator/src/main.py | jq .
python country_pilots_metadata_generator/src/main.py | jq .
```
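Assuming your AWS credentials are configured, you can then verify the upload with the AWS CLI (the bucket path is a placeholder for your `BUCKET` value):

```sh
aws s3 ls s3://<your-bucket>/ --recursive
```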