# Executing pyCIAM

This README describes the workflow used to produce the results in Depsky et al. 2023. The notebooks needed to run an example pyCIAM workflow, or to recreate the full set of Depsky et al. 2023 results, are listed in `run_example.sh` and `run_full_replication.sh`, respectively.

The aggregated coastal input dataset required for pyCIAM is SLIIDERS. Alternatively, users may construct their own inputs, for example to integrate alternative underlying data layers; these inputs must still conform to the format of the SLIIDERS dataset. We recommend starting from the construction code found in the SLIIDERS repository.
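
If you do build your own inputs, it can be useful to sanity-check them against the dimensions and variables you expect before running the model. The following is a minimal sketch, assuming a local Zarr store; the path, dimension, and variable names are hypothetical, so consult the SLIIDERS documentation for the authoritative schema.

```python
# Minimal sketch: sanity-check a custom SLIIDERS-like input store.
# The path, dimension, and variable names below are hypothetical --
# consult the SLIIDERS documentation for the authoritative schema.
import xarray as xr

ds = xr.open_zarr("data/my-sliiders-like-inputs.zarr")

# Hypothetical set of dimensions/variables a SLIIDERS-like dataset
# would be expected to carry.
expected_dims = {"seg_adm", "elev", "year"}
expected_vars = {"K", "pop", "landarea"}

missing_dims = expected_dims - set(ds.dims)
missing_vars = expected_vars - set(ds.data_vars)
if missing_dims or missing_vars:
    raise ValueError(
        f"Input does not conform to the SLIIDERS format: "
        f"missing dims {missing_dims}, missing vars {missing_vars}"
    )
```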

Common filepaths, settings, and helper functions used throughout this workflow are contained in `shared.py`. Adjust these as needed; in particular, you will need to modify the filepaths to suit your data storage structure.
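
For example, path constants in the style of `shared.py` might be pointed at your own storage like this. The variable names below are illustrative, not the actual names defined in `shared.py`:

```python
# Hypothetical excerpt in the style of shared.py: adjust these path
# constants to match your own storage layout. The variable names are
# illustrative, not the actual names used in shared.py.
from pathlib import Path

DATA_DIR = Path("/path/to/your/data")
PATH_SLIIDERS = DATA_DIR / "raw" / "sliiders.zarr"
PATH_SLR_AR6 = DATA_DIR / "raw" / "slr" / "ar6.zarr"
PATH_OUTPUTS = DATA_DIR / "outputs"
```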

To replicate the manuscript results, run the following notebooks in the order listed.

  1. `data-acquisition.ipynb`: This notebook downloads all input data necessary to replicate the results of Depsky et al. 2023, with options to download only the subset necessary to run an example pyCIAM model.
  2. `data-processing/collapse-sliiders-to-seg.ipynb`: SLIIDERS is provided with each analysis unit corresponding to a unique combination of admin1 region and coastal segment. This is helpful for aggregating results to admin1-level outputs, since decision-making in pyCIAM occurs at the segment level. For certain use cases, e.g. creating the surge lookup table, the additional admin1 dimension is unnecessary and leads to excess computational demands, so this notebook collapses the dataset to the segment level (see the collapse sketch after this list). It would not be necessary if, for example, a user created a SLIIDERS alternative indexed only by segment.
  3. `data-processing/create-diaz-pyCIAM-inputs.ipynb`: This notebook generates a SLIIDERS-like input dataset reflecting the inputs used in Diaz 2016. This is necessary to compare results from the original CIAM paper against the updated version; these comparisons are performed and reported in Depsky et al. 2023.
  4. `data-processing/slr/AR6.ipynb`: This notebook processes SLR projections based on AR6 emissions scenarios from the FACTS SLR framework.
  5. `data-processing/slr/sweet.ipynb`: This notebook processes FACTS-generated projections grouped by end-of-century GMSL level, as in Sweet et al. 2022.
  6. `data-processing/slr/AR5`: These notebooks run LocalizeSL (the predecessor to FACTS) on a variety of SLR scenarios based largely on the IPCC AR5 emissions scenarios. See the README inside this folder for more details.
  7. `models/create-surge-lookup-tables.ipynb`: This notebook creates segment-adm1-specific lookup tables that estimate the fraction of total capital stock lost and the fraction of total population killed as a function of extreme sea level height. Computing these on the fly for a large number of SLR simulations is computationally intractable given the numerical integration required, so lookup tables are used to enable these calculations (a schematic illustration follows this list).
  8. `models/fit-movefactor.ipynb`: This notebook performs the empirical estimation of the relocation cost parameter `movefactor`, as detailed in Depsky et al. 2023. It is purely for analysis and does not create any output datasets needed by other notebooks.
  9. `models/run-pyCIAM-slrquantiles.ipynb`: This notebook is a thin wrapper that calls `execute_pyciam()` with the appropriate inputs (see the sketch after this list).
  10. `models/run-pyCIAM-diaz2016.ipynb`: This notebook is a thin wrapper that calls `execute_pyciam()` with inputs and configuration consistent with Diaz 2016. Its outputs are used for validation and comparison within Depsky et al. 2023.
  11. `post-processing/pyCIAM-results-figures.ipynb`: This notebook generates the numbers and figures used in Depsky et al. 2023.
  12. `post-processing/zenodo-upload.ipynb`: This notebook can be used by core model developers to upload new versions of SLIIDERS and/or model outputs to Zenodo. Typical users will not need it.
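
To illustrate step 2, collapsing a seg-by-admin1 dataset to the segment level is conceptually a grouped sum over the extensive variables. This is a rough sketch only, assuming a hypothetical `seg` coordinate along a `seg_adm` dimension; the actual notebook handles additional variables and non-additive fields.

```python
# Rough sketch of the collapse performed in step 2: aggregate extensive
# variables (e.g. capital, population, land area) from seg-adm1 analysis
# units down to segments. The dimension, coordinate, and variable names
# are assumptions for illustration, not the notebook's actual code.
import xarray as xr

ds = xr.open_zarr("data/sliiders.zarr")  # hypothetical local path

# Sum extensive quantities over all admin1 pieces of each segment,
# assuming a "seg" coordinate along the "seg_adm" dimension. Intensive
# fields (e.g. unit costs) would need weighted means instead.
collapsed = ds[["K", "pop", "landarea"]].groupby("seg").sum("seg_adm")
collapsed.to_zarr("data/sliiders-seg.zarr", mode="w")
```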
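The role of the lookup tables from step 7 can be illustrated with a simple interpolation: rather than re-running the costly damage integration for every SLR draw, damage fractions are read off a precomputed grid of extreme sea level heights. A schematic sketch with made-up numbers:

```python
# Schematic illustration of step 7: precompute damage fractions on a
# grid of extreme sea level (ESL) heights, then interpolate at run time
# instead of repeating the expensive numerical integration per SLR draw.
# All numbers and names here are illustrative.
import numpy as np

esl_grid = np.linspace(0.0, 10.0, 101)      # ESL heights (m)
frac_K_lost = 1 - np.exp(-0.3 * esl_grid)   # stand-in damage curve

def capital_fraction_lost(esl_height_m):
    """Look up the fraction of capital stock lost at a given ESL height."""
    return np.interp(esl_height_m, esl_grid, frac_K_lost)

print(capital_fraction_lost(2.5))
```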
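Steps 9 and 10 reduce to a single call to `execute_pyciam()`. Below is a minimal sketch of what such a wrapper looks like; the import path and keyword arguments are placeholders, since the authoritative call signature and input paths are set in the notebooks via `shared.py`.

```python
# Minimal sketch of the thin wrappers in steps 9-10. The import path and
# keyword arguments shown are assumptions for illustration: consult
# run-pyCIAM-slrquantiles.ipynb and shared.py for the actual call
# signature and input paths.
from pyCIAM import execute_pyciam

execute_pyciam(
    params_path="params.json",           # model parameters (assumed name)
    sliiders_path="data/sliiders.zarr",  # SLIIDERS inputs (assumed name)
    slr_paths=["data/slr/ar6.zarr"],     # processed SLR projections (assumed name)
    output_path="outputs/pyciam.zarr",   # results destination (assumed name)
)
```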