diff --git a/book/09_Scratch/about_the_tutorial/Assessment_Activities.md b/about_the_tutorial/Assessment_Activities.md
similarity index 100%
rename from book/09_Scratch/about_the_tutorial/Assessment_Activities.md
rename to about_the_tutorial/Assessment_Activities.md
diff --git a/book/09_Scratch/about_the_tutorial/Learner_Personas.md b/about_the_tutorial/Learner_Personas.md
similarity index 100%
rename from book/09_Scratch/about_the_tutorial/Learner_Personas.md
rename to about_the_tutorial/Learner_Personas.md
diff --git a/book/09_Scratch/about_the_tutorial/Module_Objectives.md b/about_the_tutorial/Module_Objectives.md
similarity index 100%
rename from book/09_Scratch/about_the_tutorial/Module_Objectives.md
rename to about_the_tutorial/Module_Objectives.md
diff --git a/book/09_Scratch/about_the_tutorial/Outcomes.md b/about_the_tutorial/Outcomes.md
similarity index 100%
rename from book/09_Scratch/about_the_tutorial/Outcomes.md
rename to about_the_tutorial/Outcomes.md
diff --git a/book/09_Scratch/about_the_tutorial/Requirements.md b/about_the_tutorial/Requirements.md
similarity index 100%
rename from book/09_Scratch/about_the_tutorial/Requirements.md
rename to about_the_tutorial/Requirements.md
diff --git a/book/09_Scratch/about_the_tutorial/disenio_leccion.md b/about_the_tutorial/disenio_leccion.md
similarity index 100%
rename from book/09_Scratch/about_the_tutorial/disenio_leccion.md
rename to about_the_tutorial/disenio_leccion.md
diff --git a/book/02_Software_Tools/02_Data_Visualization_Tools.md b/book/02_Software_Tools/02_Data_Visualization_Tools.md
index 60f61b9..96a1812 100644
--- a/book/02_Software_Tools/02_Data_Visualization_Tools.md
+++ b/book/02_Software_Tools/02_Data_Visualization_Tools.md
@@ -41,7 +41,7 @@ from geoviews import opts
### Displaying a basemap
-A *basemap* or *tile layer* is useful when displaying vector or raster data because it allows us to overlay the relevant geospatial data on a familar gepgraphical map as a background. The principal utility is we'll use is `gv.tile_sources`. We can use the method `opts` to specify additional confirguration settings. Below, we use the *Open Street Map (OSM)* Web Map Tile Service to create the object `basemap`. When we display the repr for this object in the notebook cell, the Bokeh menu at right enables interactive exploration.
+A *basemap* or *tile layer* is useful when displaying vector or raster data because it allows us to overlay the relevant geospatial data on a familiar geographical map as a background. The principal utility we'll use is `gv.tile_sources`. We can use the method `opts` to specify additional configuration settings. Below, we use the *Open Street Map (OSM)* Web Map Tile Service to create the object `basemap`. When we display the repr for this object in the notebook cell, the Bokeh menu at right enables interactive exploration.
```python jupyter={"source_hidden": true}
diff --git a/book/07_Wildfire_analysis/Retrieving_Disturbance_Data.md b/book/07_Wildfire_analysis/Retrieving_Disturbance_Data.md
deleted file mode 100644
index 066a717..0000000
--- a/book/07_Wildfire_analysis/Retrieving_Disturbance_Data.md
+++ /dev/null
@@ -1,329 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.1
- kernelspec:
- display_name: Python 3 (ipykernel)
- language: python
- name: python3
----
-
-# Retrieving Disturbance Data
-
-The [OPERA DIST-HLS data product](https://lpdaac.usgs.gov/documents/1766/OPERA_DIST_HLS_Product_Specification_V1.pdf) can be used to study the impacts and evolution of wildfires at a large scale. In this notebook, we will retrieve data associated with the [2023 Greece wildfires](https://en.wikipedia.org/wiki/2023_Greece_wildfires) to understand their evolution and extent. We will also generate a time series visualization of the event.
-
-In particular, we will be examining the area around the city of [Alexandroupolis](https://en.wikipedia.org/wiki/Alexandroupolis) which was severely impacted by the wildfires, resulting in loss of lives, property, and forested areas.
-
-```python
-# Plotting imports
-import matplotlib.pyplot as plt
-from matplotlib.colors import ListedColormap
-from rasterio.plot import show
-from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar
-
-# GIS imports
-from shapely.geometry import Point
-from osgeo import gdal
-from rasterio.merge import merge
-import rasterio
-import contextily as cx
-import folium
-
-# data wrangling imports
-import pandas as pd
-import numpy as np
-import xarray as xr
-import rioxarray  # noqa: F401 -- registers the .rio accessor used in urls_to_dataset below
-
-# misc imports
-from datetime import datetime, timedelta
-from collections import defaultdict
-
-# STAC imports to retrieve cloud data
-from pystac_client import Client
-
-# GDAL setup for accessing cloud data
-gdal.SetConfigOption('GDAL_HTTP_COOKIEFILE','~/cookies.txt')
-gdal.SetConfigOption('GDAL_HTTP_COOKIEJAR', '~/cookies.txt')
-gdal.SetConfigOption('GDAL_DISABLE_READDIR_ON_OPEN','EMPTY_DIR')
-gdal.SetConfigOption('CPL_VSIL_CURL_ALLOWED_EXTENSIONS','TIF, TIFF')
-```
-
-```python
-# Define data search parameters
-
-# Define AOI as a point with a 0.1 degree buffer; its bounds give the (left, bottom, right, top) lat/lon extent
-dadia_forest = Point(26.18, 41.08).buffer(0.1)
-
-# We will search data for August-September 2023
-start_date = datetime(year=2023, month=8, day=1)
-stop_date = datetime(year=2023, month=9, day=30)
-```
-
-```python
-# We open a client instance to search for data, and retrieve relevant data records
-STAC_URL = 'https://cmr.earthdata.nasa.gov/stac'
-
-# Setup PySTAC client
-# LPCLOUD refers to the LP DAAC cloud environment that hosts earth observation data
-catalog = Client.open(f'{STAC_URL}/LPCLOUD/')
-
-collections = ["OPERA_L3_DIST-ALERT-HLS_V1"]
-
-# We would like to search data for August-September 2023
-date_range = f'{start_date.strftime("%Y-%m-%d")}/{stop_date.strftime("%Y-%m-%d")}'
-
-opts = {
- 'bbox' : dadia_forest.bounds,
- 'collections': collections,
- 'datetime' : date_range,
-}
-
-search = catalog.search(**opts)
-```
-
-NOTE: The OPERA DIST data product is hosted on [LP DAAC](https://lpdaac.usgs.gov/news/lp-daac-releases-opera-land-surface-disturbance-alert-version-1-data-product/), and this is specified when setting up the PySTAC client to search their catalog of data products in the above code cell.
-
-```python
-results = list(search.items_as_dicts())
-print(f"Number of tiles found intersecting given AOI: {len(results)}")
-```
-
-Let's load the search results into a pandas dataframe
-
-```python
-def search_to_df(results, layer_name = 'VEG-DIST-STATUS'):
-
-    times = pd.DatetimeIndex([result['properties']['datetime'] for result in results]) # parse timestamp for each result
- data = {'hrefs': [value['href'] for result in results for key, value in result['assets'].items() if layer_name in key], # parse out links only to DIST-STATUS data layer
- 'tile_id': [value['href'].split('/')[-1].split('_')[3] for result in results for key, value in result['assets'].items() if layer_name in key]}
-
- # Construct pandas dataframe to summarize granules from search results
- granules = pd.DataFrame(index=times, data=data)
- granules.index.name = 'times'
-
- return granules
-```
-
-```python
-granules = search_to_df(results)
-granules.head()
-
-```
-
-```python
-# Let's refine the dataframe a bit more so that we group together granules by
-# date of acquisition - we don't mind if they were acquired at different times
-# of the same day
-
-refined_granules = defaultdict(list)
-
-for i, row in granules.iterrows():
- refined_granules[i.strftime('%Y-%m-%d')].append(row.hrefs)
-
-refined_granules = pd.DataFrame(index=refined_granules.keys(), data = {'hrefs':refined_granules.values()})
-```
-
-```python
-# The wildfire near Alexandroupolis started on August 21st and rapidly spread, particularly affecting the nearby Dadia Forest
-# For demonstration purposes, let's look at three dates to study the extent of the fire -
-# August 1st, August 26th, and September 18th
-# We will plot the OPERA-DIST-ALERT data product, highlighting only those pixels corresponding to confirmed vegetation damage,
-# and in particular only those pixels where at least 50% of the area was affected (layer value 6)
-
-dates_of_interest = [datetime(year=2023, month=8, day=1), datetime(year=2023, month=8, day=26), datetime(year=2023, month=9, day=18)]
-hrefs_of_interest = [x.hrefs for i, x in refined_granules.iterrows() if datetime.strptime(i, '%Y-%m-%d') in dates_of_interest]
-```
-
-**Relevant layer values for the DIST-ALERT product** (a quick way to tabulate them is sketched after this list):
-
-* **0:** No disturbance
-* **1:** First detection of disturbance with vegetation cover change <50%
-* **2:** Provisional detection of disturbance with vegetation cover change <50%
-* **3:** Confirmed detection of disturbance with vegetation cover change <50%
-* **4:** First detection of disturbance with vegetation cover change >50%
-* **5:** Provisional detection of disturbance with vegetation cover change >50%
-* **6:** Confirmed detection of disturbance with vegetation cover change >50%
-
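-Before plotting, it can be handy to check how many pixels fall into each of these classes. Here is a minimal sketch, assuming a `raster` array like the one read from the merged granules in the plotting cell below:
-
-```python
-# Tabulate pixel counts per DIST-ALERT class (0-6, plus any fill values present)
-values, counts = np.unique(raster, return_counts=True)
-for value, count in zip(values, counts):
-    print(f"class {value}: {count} pixels")
-```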
-```python
-# Define color map to generate plot (Red, Green, Blue, Alpha)
-colors = [(1, 1, 1, 0)] * 256 # Initially set all values to white, with zero opacity
-colors[6] = (1, 0, 0, 1) # Set class 6 to Red with 100% opacity
-
-# Create a ListedColormap
-cmap = ListedColormap(colors)
-```
-
-```python
-fig, ax = plt.subplots(1, 3, figsize = (30, 10))
-crs = None
-
-for i, (date, hrefs) in enumerate(zip(dates_of_interest, hrefs_of_interest)):
-
- # Read the crs to be used to generate basemaps
- if crs is None:
- with rasterio.open(hrefs[0]) as ds:
- crs = ds.crs
-
- if len(hrefs) == 1:
- with rasterio.open(hrefs[0]) as ds:
- raster = ds.read()
- transform = ds.transform
- else:
- raster, transform = merge(hrefs)
-
- show(raster, ax=ax[i], transform=transform, interpolation='none')
- cx.add_basemap(ax[i], crs=crs, zoom=9, source=cx.providers.OpenStreetMap.Mapnik)
- show(raster, ax=ax[i], transform=transform, interpolation='none', cmap=cmap)
-
- scalebar = AnchoredSizeBar(ax[i].transData,
- 10000 , '10 km', 'lower right',
- color='black',
- frameon=False,
- pad = 0.25,
- sep=5,
- fontproperties = {'weight':'semibold', 'size':12},
- size_vertical=300)
-
- ax[i].add_artist(scalebar)
- ax[i].ticklabel_format(axis='both', style='scientific',scilimits=(0,0),useOffset=False,useMathText=True)
- ax[i].set_xlabel('UTM easting (meters)')
- ax[i].set_ylabel('UTM northing (meters)')
- ax[i].set_title(f"Disturbance extent on: {date.strftime('%Y-%m-%d')}")
-```
-
-Next, let's calculate the area extent of damage over time
-
-```python
-damage_area = []
-conversion_factor = (30*1e-3)**2 # to convert pixel count to area in km^2; each pixel is 30x30 meters
-
-# this will take a few minutes to run, since we are retrieving data for multiple days
-for index, row in refined_granules.iterrows():
- raster, transform = merge(row.hrefs)
- damage_area.append(np.sum(raster==6)*conversion_factor)
-
-refined_granules['damage_area'] = damage_area
-
-```
-
-```python
-fig, ax = plt.subplots(1, 1, figsize=(20, 10))
-ax.plot([datetime.strptime(i, '%Y-%m-%d') for i in refined_granules.index], refined_granules['damage_area'], color='red')
-ax.grid()
-plt.ylabel('Area damaged by wildfire (km$^2$)', size=15)
-plt.xlabel('Date', size=15)
-plt.xticks([datetime(year=2023, month=8, day=1) + timedelta(days=6*i) for i in range(11)], size=14)
-plt.title('2023 Dadia forest wildfire detected extent', size=14)
-```
-
-### Great Green Wall, Sahel Region, Africa
-
-```python
-ndiaye_senegal = Point(-16.09, 16.50)
-
-# We will search across the full product record, from January 2022 to the present
-start_date = datetime(year=2022, month=1, day=1)
-stop_date = datetime.now()
-```
-
-```python
-# Plotting search location in folium as a sanity check
-m = folium.Map(location=(ndiaye_senegal.y, ndiaye_senegal.x), control_scale = True, zoom_start=9)
-radius = 5000
-folium.Circle(
- location=[ndiaye_senegal.y, ndiaye_senegal.x],
- radius=radius,
- color="red",
- stroke=False,
- fill=True,
- fill_opacity=0.6,
- opacity=1,
- popup="{} pixels".format(radius),
- tooltip="50 px radius",
- #
-).add_to(m)
-
-m
-```
-
-```python
-# We open a client instance to search for data, and retrieve relevant data records
-STAC_URL = 'https://cmr.earthdata.nasa.gov/stac'
-
-# Setup PySTAC client
-# LPCLOUD refers to the LP DAAC cloud environment that hosts earth observation data
-catalog = Client.open(f'{STAC_URL}/LPCLOUD/')
-
-collections = ["OPERA_L3_DIST-ANN-HLS_V1"]
-
-# We would like to search data from January 2022 to the present
-date_range = f'{start_date.strftime("%Y-%m-%d")}/{stop_date.strftime("%Y-%m-%d")}'
-
-opts = {
- 'bbox' : ndiaye_senegal.bounds,
- 'collections': collections,
- 'datetime' : date_range,
-}
-
-search = catalog.search(**opts)
-results = list(search.items_as_dicts())
-print(f"Number of tiles found intersecting given AOI: {len(results)}")
-```
-
-```python
-def urls_to_dataset(granule_dataframe):
- '''method that takes in a list of OPERA tile URLs and returns an xarray dataset with dimensions
- latitude, longitude and time'''
-
- dataset_list = []
-
- for i, row in granule_dataframe.iterrows():
- with rasterio.open(row.hrefs) as ds:
- # extract CRS string
- crs = str(ds.crs).split(':')[-1]
-
- # extract the image spatial extent (xmin, ymin, xmax, ymax)
- xmin, ymin, xmax, ymax = ds.bounds
-
- # the x and y resolution of the image is available in image metadata
- x_res = np.abs(ds.transform[0])
- y_res = np.abs(ds.transform[4])
-
- # read the data
- img = ds.read()
-
- # Ensure img has three dimensions (bands, y, x)
- if img.ndim == 2:
- img = np.expand_dims(img, axis=0)
-
- lon = np.arange(xmin, xmax, x_res)
- lat = np.arange(ymax, ymin, -y_res)
-
- lon_grid, lat_grid = np.meshgrid(lon, lat)
-
- da = xr.DataArray(
- data=img,
- dims=["band", "y", "x"],
- coords=dict(
- lon=(["y", "x"], lon_grid),
- lat=(["y", "x"], lat_grid),
- time=i,
- band=np.arange(img.shape[0])
- ),
- attrs=dict(
- description="OPERA DIST ANN",
- units=None,
- ),
- )
- da.rio.write_crs(crs, inplace=True)
-
- dataset_list.append(da)
- return xr.concat(dataset_list, dim='time').squeeze()
-
-# Convert the DIST-ANN search results to a dataframe before building the dataset
-# (this assumes the default 'VEG-DIST-STATUS' layer name also matches the DIST-ANN assets)
-granules = search_to_df(results)
-dataset = urls_to_dataset(granules)
-```
diff --git a/book/07_Wildfire_analysis/Wildfire.md b/book/07_Wildfire_analysis/Wildfire.md
deleted file mode 100644
index 6cd0988..0000000
--- a/book/07_Wildfire_analysis/Wildfire.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Wildfire
-
-
diff --git a/book/08_Flood_analysis/3_Retrieving_FloodData.md b/book/08_Flood_analysis/3_Retrieving_FloodData.md
deleted file mode 100644
index 8cf6223..0000000
--- a/book/08_Flood_analysis/3_Retrieving_FloodData.md
+++ /dev/null
@@ -1,431 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.2
- kernelspec:
- display_name: Python 3 (ipykernel)
- language: python
- name: python3
----
-
-## Retrieving OPERA DSWx-HLS data for a flood event
-
-Heavy rains severely impacted Southeast Texas in May 2024 [[1]](https://www.texastribune.org/2024/05/03/texas-floods-weather-harris-county/), resulting in flooding and causing significant damage to property and human life [[2]](https://www.texastribune.org/series/east-texas-floods-2024/). In this notebook, we will retrieve [OPERA DSWx-HLS](https://d2pn8kiwq2w21t.cloudfront.net/documents/ProductSpec_DSWX_URS309746.pdf) data associated with this event to understand the extent of flooding and damage, and visualize data from before, during, and after the event.
-
-```python
-import rasterio
-import rioxarray
-import folium
-
-import hvplot.xarray # noqa
-import xarray as xr
-import xyzservices.providers as xyz
-
-from shapely.geometry import Point
-from osgeo import gdal
-
-from holoviews.plotting.util import process_cmap
-
-import pandas as pd
-
-from warnings import filterwarnings
-filterwarnings("ignore") # suppress PySTAC warnings
-
-# STAC imports to retrieve cloud data
-from pystac_client import Client
-
-from datetime import datetime
-import numpy as np
-
-# GDAL setup for accessing cloud data
-gdal.SetConfigOption('GDAL_HTTP_COOKIEFILE','~/cookies.txt')
-gdal.SetConfigOption('GDAL_HTTP_COOKIEJAR', '~/cookies.txt')
-gdal.SetConfigOption('GDAL_DISABLE_READDIR_ON_OPEN','EMPTY_DIR')
-gdal.SetConfigOption('CPL_VSIL_CURL_ALLOWED_EXTENSIONS','TIF, TIFF')
-
-```
-
-```python
-# Let's set up the parameters of our search query
-
-# Flooding in Texas, 2024;
-livingston_tx = Point(-95.09, 30.69)
-
-# We will search data around the flooding event at the start of May
-start_date = datetime(year=2024, month=4, day=30)
-stop_date = datetime(year=2024, month=5, day=31)
-date_range = f'{start_date.strftime("%Y-%m-%d")}/{stop_date.strftime("%Y-%m-%d")}'
-
-# We open a client instance to search for data, and retrieve relevant data records
-STAC_URL = 'https://cmr.earthdata.nasa.gov/stac'
-
-# Setup PySTAC client
-# POCLOUD refers to the PO DAAC cloud environment that hosts earth observation data
-catalog = Client.open(f'{STAC_URL}/POCLOUD/')
-
-collections = ["OPERA_L3_DSWX-HLS_V1"]
-
-opts = {
- 'bbox' : livingston_tx.buffer(0.01).bounds,
- 'collections': collections,
- 'datetime' : date_range,
-}
-```
-
-
-```python
-livingston_tx = Point(-95.09, 30.69)
-m = folium.Map(location=(livingston_tx.y, livingston_tx.x), control_scale = True, zoom_start=8)
-radius = 15000
-folium.Circle(
- location=[livingston_tx.y, livingston_tx.x],
- radius=radius,
- color="red",
- stroke=False,
- fill=True,
- fill_opacity=0.6,
- opacity=1,
- popup="{} pixels".format(radius),
- tooltip="Livingston, TX",
- #
-).add_to(m)
-
-m
-```
-
-```python
-# Execute the search
-search = catalog.search(**opts)
-results = list(search.items_as_dicts())
-print(f"Number of tiles found intersecting given AOI: {len(results)}")
-```
-
-```python
-def search_to_df(results, layer_name='0_B01_WTR'):
- '''
- Given search results returned from a NASA earthdata query, load and return relevant details and band links as a dataframe
- '''
-
-    times = pd.DatetimeIndex([result['properties']['datetime'] for result in results]) # parse timestamp for each result
- data = {'hrefs': [value['href'] for result in results for key, value in result['assets'].items() if layer_name in key],
- 'tile_id': [value['href'].split('/')[-1].split('_')[3] for result in results for key, value in result['assets'].items() if layer_name in key]}
-
-
- # Construct pandas dataframe to summarize granules from search results
- granules = pd.DataFrame(index=times, data=data)
- granules.index.name = 'times'
-
- return granules
-```
-
-```python
-granules = search_to_df(results)
-granules.head()
-```
-
-```python
-# We now filter the dataframe to restrict our results to a single tile_id
-granules = granules[granules.tile_id == 'T15RTQ']
-granules.sort_index(inplace=True)
-```
-
-```python
-granules
-```
-
-```python
-type(results[0])
-
-```
-
-```python
-def filter_search_by_cc(results, cloud_threshold=10):
- '''
- Given a list of results returned by the NASA Earthdata STAC API for OPERA DSWx data,
- filter them by cloud cover
- '''
-
- filtered_results = []
-
- for result in results:
-        try:
-            cloud_cover = result['properties']['eo:cloud_cover']
-        except KeyError:
-            # estimate cloud cover from the WTR layer itself (value 253 marks cloud / cloud shadow)
-            href = result['assets']['0_B01_WTR']['href']
-            with rasterio.open(href) as ds:
-                img = ds.read(1).flatten()
-                cloud_cover = 100*(np.sum(np.isin(img, 253))/img.size)
-
- if cloud_cover <= cloud_threshold:
- filtered_results.append(result)
-
- return filtered_results
-```
-
-```python
-def urls_to_dataset(granule_dataframe):
- '''method that takes in a list of OPERA tile URLs and returns an xarray dataset with dimensions
- latitude, longitude and time'''
-
- dataset_list = []
-
- for i, row in granule_dataframe.iterrows():
- with rasterio.open(row.hrefs) as ds:
- # extract CRS string
- crs = str(ds.crs).split(':')[-1]
-
- # extract the image spatial extent (xmin, ymin, xmax, ymax)
- xmin, ymin, xmax, ymax = ds.bounds
-
- # the x and y resolution of the image is available in image metadata
- x_res = np.abs(ds.transform[0])
- y_res = np.abs(ds.transform[4])
-
- # read the data
- img = ds.read()
-
- # Ensure img has three dimensions (bands, y, x)
- if img.ndim == 2:
- img = np.expand_dims(img, axis=0)
-
- lon = np.arange(xmin, xmax, x_res)
- lat = np.arange(ymax, ymin, -y_res)
-
- lon_grid, lat_grid = np.meshgrid(lon, lat)
-
- da = xr.DataArray(
- data=img,
- dims=["band", "y", "x"],
- coords=dict(
- lon=(["y", "x"], lon_grid),
- lat=(["y", "x"], lat_grid),
- time=i,
- band=np.arange(img.shape[0])
- ),
- attrs=dict(
- description="OPERA DSWx B01",
- units=None,
- ),
- )
- da.rio.write_crs(crs, inplace=True)
-
- dataset_list.append(da)
- return xr.concat(dataset_list, dim='time').squeeze()
-
-dataset= urls_to_dataset(granules)
-```
-
-```python
-COLORS = [(150, 150, 150, 0)]*256   # default: transparent gray
-COLORS[0] = (0, 255, 0, 1)          # Not water
-COLORS[1] = (0, 0, 255, 1)          # OSW (open surface water)
-COLORS[2] = (0, 0, 255, 1)          # PSW (partial surface water)
-COLORS[252] = (0, 0, 255, 1)        # Snow/Ice
-COLORS[253] = (150, 150, 150, 1)    # Cloud / cloud shadow
-```
-
-```python
-img = dataset.hvplot.quadmesh(title = 'DSWx data for May 2024 Texas floods',
- x='lon', y='lat',
- project=True, rasterize=True,
- cmap=COLORS,
- colorbar=False,
- widget_location='bottom',
- tiles = xyz.OpenStreetMap.Mapnik,
- xlabel='Longitude (degrees)',ylabel='Latitude (degrees)',
- fontscale=1.25, frame_width=1000, frame_height=1000)
-
-img
-```
-
-### Vaigai Reservoir
-
-```python
-vaigai_reservoir = Point(77.568, 10.054)
-```
-
-```python
-# Plotting search location in folium as a sanity check
-m = folium.Map(location=(vaigai_reservoir.y, vaigai_reservoir.x), control_scale = True, zoom_start=9)
-radius = 5000
-folium.Circle(
- location=[vaigai_reservoir.y, vaigai_reservoir.x],
- radius=radius,
- color="red",
- stroke=False,
- fill=True,
- fill_opacity=0.6,
- opacity=1,
- popup="{} pixels".format(radius),
- tooltip="50 px radius",
- #
-).add_to(m)
-
-m
-```
-
-```python
-# We will search a year of data spanning the December 2023 flooding event
-start_date = datetime(year=2023, month=4, day=1)
-stop_date = datetime(year=2024, month=4, day=1)
-date_range = f'{start_date.strftime("%Y-%m-%d")}/{stop_date.strftime("%Y-%m-%d")}'
-
-# We open a client instance to search for data, and retrieve relevant data records
-STAC_URL = 'https://cmr.earthdata.nasa.gov/stac'
-
-# Setup PySTAC client
-# POCLOUD refers to the PO DAAC cloud environment that hosts earth observation data
-catalog = Client.open(f'{STAC_URL}/POCLOUD/')
-
-collections = ["OPERA_L3_DSWX-HLS_V1"]
-
-# Setup search options
-opts = {
- 'bbox' : vaigai_reservoir.buffer(0.01).bounds,
- 'collections': collections,
- 'datetime' : date_range,
-}
-
-# Execute the search
-search = catalog.search(**opts)
-results = list(search.items_as_dicts())
-print(f"Number of tiles found intersecting given AOI: {len(results)}")
-```
-
-```python
-# let's filter our results so that only scenes with less than 10% cloud cover are returned
-results = filter_search_by_cc(results)
-
-print("Number of results containing less than 10% cloud cover: ", len(results))
-```
-
-```python
-# Load results into dataframe
-granules = search_to_df(results)
-```
-
-```python
-# This may take a while depending on the number of results we are loading
-dataset= urls_to_dataset(granules)
-```
-
-```python
-img = dataset.hvplot.quadmesh(title = 'Vaigai Reservoir, India - water extent over a year',
- x='lon', y='lat',
- project=True, rasterize=True,
- cmap=COLORS,
- colorbar=False,
- widget_location='bottom',
- tiles = xyz.OpenStreetMap.Mapnik,
- xlabel='Longitude (degrees)',ylabel='Latitude (degrees)',
- fontscale=1.25, frame_width=1000, frame_height=1000)
-
-img
-```
-
-### Lake Mead
-
-```python
-lake_mead = Point(-114.348, 36.423)
-```
-
-```python
-# Plotting search location in folium as a sanity check
-m = folium.Map(location=(lake_mead.y, lake_mead.x), control_scale = True, zoom_start=9)
-radius = 5000
-folium.Circle(
- location=[lake_mead.y, lake_mead.x],
- radius=radius,
- color="red",
- stroke=False,
- fill=True,
- fill_opacity=0.6,
- opacity=1,
- popup="{} pixels".format(radius),
- tooltip="50 px radius",
- #
-).add_to(m)
-
-m
-```
-
-```python
-# We will search data over a little more than a year (March 2023 - June 2024)
-start_date = datetime(year=2023, month=3, day=1)
-stop_date = datetime(year=2024, month=6, day=15)
-date_range = f'{start_date.strftime("%Y-%m-%d")}/{stop_date.strftime("%Y-%m-%d")}'
-
-# We open a client instance to search for data, and retrieve relevant data records
-STAC_URL = 'https://cmr.earthdata.nasa.gov/stac'
-
-# Setup PySTAC client
-# POCLOUD refers to the PO DAAC cloud environment that hosts earth observation data
-catalog = Client.open(f'{STAC_URL}/POCLOUD/')
-
-collections = ["OPERA_L3_DSWX-HLS_V1"]
-
-# Setup search options
-opts = {
- 'bbox' : lake_mead.buffer(0.01).bounds,
- 'collections': collections,
- 'datetime' : date_range,
-}
-
-# Execute the search
-search = catalog.search(**opts)
-results = list(search.items_as_dicts())
-print(f"Number of tiles found intersecting given AOI: {len(results)}")
-```
-
-```python
-# let's filter our results so that only scenes with less than 10% cloud cover are returned
-results = filter_search_by_cc(results)
-
-print("Number of results containing less than 10% cloud cover: ", len(results))
-```
-
-That's a fairly large number of tiles! Loading the data into an xarray dataset will take a few minutes.
-
-```python
-# Load results into dataframe
-granules = search_to_df(results)
-```
-
-```python
-# Let's filter by tile id so that we can study changes over the same spatial extent
-granules = granules[granules.tile_id=='T11SQA']
-```
-
-```python
-# Similarly, loading a year's worth of data will take a few minutes
-dataset= urls_to_dataset(granules)
-```
-
-```python
-img = dataset.hvplot.quadmesh(title = 'Lake Mead, NV USA - water extent over a year',
- x='lon', y='lat',
- project=True, rasterize=True,
- cmap=COLORS,
- colorbar=False,
- widget_location='bottom',
- tiles = xyz.OpenStreetMap.Mapnik,
- xlabel='Longitude (degrees)',ylabel='Latitude (degrees)',
- fontscale=1.25, frame_width=1000, frame_height=1000)
-
-img
-```
diff --git a/book/08_Flood_analysis/flood.md b/book/08_Flood_analysis/flood.md
deleted file mode 100644
index d2fa60c..0000000
--- a/book/08_Flood_analysis/flood.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# Flood
-
-
-
diff --git a/book/09_Scratch/2_Selecting_an_AOI.md b/book/09_Scratch/2_Selecting_an_AOI.md
deleted file mode 100644
index cf5dda5..0000000
--- a/book/09_Scratch/2_Selecting_an_AOI.md
+++ /dev/null
@@ -1,224 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.2
- kernelspec:
- display_name: nasa_topst
- language: python
- name: python3
----
-
-# Selecting an AOI
-
-Selecting and modifying Areas of Interest (AOIs) is an important part of geospatial data analysis workflows. The Python ecosystem of libraries provides a number of ways to do this, some of which will be explored and demonstrated in this notebook. In particular, we will demonstrate the following:
-1. How to specify AOIs in different ways
-2. How to use `geopandas` to load shapely geometries, visualize them, and perform operations such as `intersection`
-3. Querying data providers using the AOIs defined, and understanding how query results can change based on AOI
-4. Perform windowing operations when reading raster data using `rasterio`
-
-These techniques can be used to query cloud services efficiently for datasets over specific regions.
-
-```python
-# library to handle filepath operations
-from pathlib import Path
-
-# library for handling geospatial data
-import rasterio
-from rasterio.plot import show
-import geopandas as gpd
-from shapely.geometry import Polygon, Point
-from pystac_client import Client
-
-# libraries to help with visualization
-import matplotlib.pyplot as plt
-from matplotlib.colors import ListedColormap
-from matplotlib import colors
-import folium
-
-# handle numbers
-import numpy as np
-
-
-# We set the following rasterio environment variables to read data from the cloud
-rio_env = rasterio.Env(
- GDAL_DISABLE_READDIR_ON_OPEN='EMPTY_DIR',
- CPL_VSIL_CURL_ALLOWED_EXTENSIONS="TIF, TIFF",
- GDAL_HTTP_COOKIEFILE=Path('~/cookies.txt').expanduser(),
- GDAL_HTTP_COOKIEJAR=Path('~/cookies.txt').expanduser()
- )
-rio_env.__enter__()
-```
-
-AOIs are `vector` data, because they consist of specific `points` or `polygons` that identify the location of interest in a given co-ordinate reference system (CRS). For example, the city center of [Tokyo, Japan](https://en.wikipedia.org/wiki/Tokyo) can be specified by the latitude and longitude pair (35.689722, 139.692222) in the [WGS84 CRS](https://en.wikipedia.org/wiki/World_Geodetic_System). In Python, we use the popular `shapely` library to define vector shapes; note that `shapely` expects coordinates in `(x, y)`, i.e. `(longitude, latitude)`, order, as shown below:
-
-```python
-tokyo_point = Point(139.692222, 35.689722) # (longitude, latitude)
-```
-
-This code will generate an interactive plot - feel free to pan/zoom around!
-
-```python
-m = folium.Map(location=(tokyo_point.y, tokyo_point.x), control_scale = True, zoom_start=8)
-radius = 50
-folium.CircleMarker(
-    location=[tokyo_point.y, tokyo_point.x],
- radius=radius,
- color="cornflowerblue",
- stroke=False,
- fill=True,
- fill_opacity=0.6,
- opacity=1,
- popup="{} pixels".format(radius),
- tooltip="50 px radius",
- #
-).add_to(m)
-
-m
-```
-
-AOIs can also take the form of bounds. Typically these are specified by four values - the minimum and maximum extents in the `x` and `y` directions. For rasterio, bounds are given in the format `(x_min, y_min, x_max, y_max)`, with values expressed in the local CRS.
-
-```python
-marrakesh_polygon = Polygon.from_bounds(-8.18, 31.42, -7.68, 31.92)
-
-# We will load the polygon into a geopandas dataframe for ease of plotting
-gdf = gpd.GeoDataFrame({'geometry':[marrakesh_polygon]}, crs='epsg:4326')
-```
-
-```python
-m = folium.Map(location=[31.62, -7.88], zoom_start=10)
-for _, row in gdf.iterrows():
- sim_geo = gpd.GeoSeries(row["geometry"]).simplify(tolerance=0.001)
- geo_j = sim_geo.to_json()
- geo_j = folium.GeoJson(data=geo_j, style_function=lambda x: {"fillColor": "orange"})
- geo_j.add_to(m)
-
-m
-```
-
-Geopandas dataframes require a `geometry` column containing `shapely` shapes (`Points`, `Polygons`, etc.) and a `CRS` to be specified in order to work and render correctly. In this example, we specify `EPSG:4326` as our CRS, which corresponds to the WGS84 system and refers to locations on the globe by `(latitude, longitude)` pairs.
-
-Let's add another polygon to the above example, and also see how to calculate their intersection:
-
-```python
-marrakesh_polygon = Polygon.from_bounds(-8.18, 31.42, -7.68, 31.92) # Original polygon
-marrakesh_polygon_2 = Polygon.from_bounds(-8.38, 31.22, -7.68, 31.52) # Arbitrary second overlapping polygon
-intersection_polygon = marrakesh_polygon.intersection(marrakesh_polygon_2) # Calculate the intersection of polygons
-
-# We will load the polygon into a geopandas dataframe for ease of plotting
-gdf = gpd.GeoDataFrame({'name':['Original Polygon', 'New Polygon', 'Intersection Area'], # Add some text that will appear when you hover over the polygon
- 'color':['blue', 'orange', 'red'], # Unique colors for each shape
- 'geometry':[marrakesh_polygon, marrakesh_polygon_2, intersection_polygon]}, # column of geometries
- crs='epsg:4326') # CRS for the dataframe, which must be common to all the shapes
-
-m = folium.Map(location=[31.62, -7.88], zoom_start=10)
-for _, row in gdf.iterrows():
- sim_geo = gpd.GeoSeries(row["geometry"]).simplify(tolerance=0.001)
- geo_j = sim_geo.to_json()
-    geo_j = folium.GeoJson(data=geo_j, style_function=lambda x, color=row['color']: {"fillColor": color}, tooltip=row["name"])
- geo_j.add_to(m)
-
-m
-```
-
-Let us now query a DAAC for data over a new region. We will go over the details of the query in the next chapter; here we simply look at an example. First, let's view the region on a folium map:
-
-```python
-lake_mead_polygon = Polygon.from_bounds(-114.52, 36.11,-114.04, 36.48)
-
-# We will load the polygon into a geopandas dataframe for ease of plotting
-gdf = gpd.GeoDataFrame({'geometry':[lake_mead_polygon]}, crs='epsg:4326')
-
-m = folium.Map(location=[36.11, -114.5], zoom_start=10)
-for _, row in gdf.iterrows():
- sim_geo = gpd.GeoSeries(row["geometry"]).simplify(tolerance=0.001)
- geo_j = sim_geo.to_json()
- geo_j = folium.GeoJson(data=geo_j, style_function=lambda x: {"fillColor": "orange"})
- geo_j.add_to(m)
-
-m
-```
-
-```python
-# URL of CMR service
-STAC_URL = 'https://cmr.earthdata.nasa.gov/stac'
-
-# Setup PySTAC client
-provider_cat = Client.open(STAC_URL)
-catalog = Client.open(f'{STAC_URL}/POCLOUD/')
-collections = ["OPERA_L3_DSWX-HLS_V1"]
-
-# We would like to search data for April 2023
-date_range = "2023-04-01/2023-04-30"
-
-opts = {
- 'bbox' : lake_mead_polygon.bounds,
- 'collections': collections,
- 'datetime' : date_range,
-}
-
-search = catalog.search(**opts)
-```
-
-```python
-print("Number of tiles found intersecting given polygon: ", len(list(search.items())))
-```
-
-How many search results did you get? What happens if you modify the date range in the previous cell and re-run the search? Note: if you make the time window too large, it will take a while for results to return.
-
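-For example, here is a sketch of re-running the same query over a different (hypothetical) one-week window, reusing `opts` and `catalog` from above:
-
-```python
-# Hypothetical narrower window: re-run the same search for a single week in April 2023
-opts['datetime'] = "2023-04-10/2023-04-17"
-search = catalog.search(**opts)
-print("Number of tiles found intersecting given polygon: ", len(list(search.items())))
-```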
-Lastly, let's visualize some of the returned data. Here's a sample returned search result - you can click on the keys and see the data contained in them:
-
-```python
-sample_result = list(search.items())[0]
-sample_result
-```
-
-```python
-data_url = sample_result.assets['0_B01_WTR'].href
-```
-
-```python
-with rasterio.open(data_url) as ds:
- img = ds.read(1)
- cmap = ds.colormap(1)
- profile = ds.profile
-cmap = ListedColormap([np.array(cmap[key]) / 255 for key in range(256)])
-```
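-
-The introduction also mentioned windowed reads with `rasterio`. As a minimal sketch (the window size and offsets below are arbitrary), we can read just a subset of the same granule instead of the full tile:
-
-```python
-from rasterio.windows import Window
-
-# Read only a 512 x 512 pixel block of the granule (offsets chosen arbitrarily)
-window = Window(col_off=1024, row_off=1024, width=512, height=512)
-with rasterio.open(data_url) as ds:
-    subset = ds.read(1, window=window)
-    subset_transform = ds.window_transform(window)
-
-print(subset.shape)
-```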
-
-```python
-fig, ax = plt.subplots(1, 1, figsize=(10, 10))
-im = show(img, ax=ax, transform=profile['transform'], cmap=cmap, interpolation='none')
-
-ax.set_xlabel("Eastings (meters)")
-ax.set_ylabel("Northings (meters)")
-ax.ticklabel_format(axis='both', style='scientific',scilimits=(0,0),useOffset=False,useMathText=True)
-
-bounds = [0, 1, 2, 3,
- 251, 252, 253,
- ]
-
-im = im.get_images()[0]
-
-cbar=fig.colorbar(im,
- ax=ax,
- shrink=0.5,
- pad=0.05,
- boundaries=bounds,
- cmap=cmap,
- ticks=[0.5, 1.5, 2.5, 251.5, 252.5])
-
-cbar.ax.tick_params(labelsize=8)
-norm = colors.BoundaryNorm(bounds, cmap.N)
-cbar.set_ticklabels(['Not Water',
- 'Open Water',
- 'Partial Surface Water',
- 'HLS Snow/Ice',
- 'HLS Cloud/Cloud Shadow',
- ],
- fontsize=7)
-```
diff --git a/book/09_Scratch/4_Analyzing_Datasets.md b/book/09_Scratch/4_Analyzing_Datasets.md
deleted file mode 100644
index b48644d..0000000
--- a/book/09_Scratch/4_Analyzing_Datasets.md
+++ /dev/null
@@ -1,2 +0,0 @@
-# Time-series analysis of datasets
-Now that we are able to identify and retrieve datasets of interest, we can perform a simple time series analysis of our dataset - for example, we will try to answer the question, "How does the amount of water present in our scene change over time?"
\ No newline at end of file
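-
-As a minimal sketch (assuming an xarray `dataset` built with a helper like `urls_to_dataset` from the flood notebook, where DSWx class 1 marks open water and each pixel is 30 m x 30 m), the water area per acquisition date could be computed like this:
-
-```python
-pixel_area_km2 = (30 * 1e-3) ** 2                                # each 30 m x 30 m pixel in km^2
-water_area = (dataset == 1).sum(dim=['x', 'y']) * pixel_area_km2  # open-water pixels per time step
-water_area.plot()                                                 # water extent (km^2) over time
-```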
diff --git a/book/09_Scratch/5_Manipulating_and_Visualizing_Datasets.md b/book/09_Scratch/5_Manipulating_and_Visualizing_Datasets.md
deleted file mode 100644
index c13e5bd..0000000
--- a/book/09_Scratch/5_Manipulating_and_Visualizing_Datasets.md
+++ /dev/null
@@ -1,2 +0,0 @@
-# Manipulating and Visualizing datasets
-AOIs may span multiple rasters, and in such situations it is useful to `mosaic` them together to obtain a single continuous image. We also show how to visualize raster images using `rasterio`'s built-in plotting routines, as well as additional Python libraries such as `matplotlib` to generate scalebars.
\ No newline at end of file
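-
-A minimal sketch of the mosaicking step (assuming a list of granule URLs such as the `hrefs` column produced by `search_to_df`, and the GDAL cloud-access configuration used in the retrieval notebooks):
-
-```python
-from rasterio.merge import merge
-from rasterio.plot import show
-
-# `list_of_hrefs` is a placeholder for the URLs of adjacent tiles covering the AOI
-mosaic, mosaic_transform = merge(list_of_hrefs)
-show(mosaic, transform=mosaic_transform, interpolation='none')
-```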
diff --git a/book/09_Scratch/SLIDES-NASA-TOPS-flood-EN.md b/book/09_Scratch/SLIDES-NASA-TOPS-flood-EN.md
deleted file mode 100644
index 3bc1080..0000000
--- a/book/09_Scratch/SLIDES-NASA-TOPS-flood-EN.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.2
----
-
-
-# Analyzing Flood Risk Reproducibly with NASA Earthdata Cloud
-
-
-

-
ScienceCore:
Climate Risk
-
-
-
-
-
-
-
-## Objective
-
-Utilize NASA's open products called Opera Dynamic Surface Water eXtent (DSWx) - Harmonized Landsat Sentinel-2 (HLS) to map the extent of flooding resulting from the September 2022 monsoon event in Pakistan.
-
-
-
-
-In 2022, Pakistan's monsoon rains reached record levels, causing devastating floods and landslides that affected all four provinces of the country and around 14% of its population. In this example, you will see how to use NASA's open products Opera DSWx-HLS to map the extent of flooding caused by the monsoon in September 2022 in Pakistan.
-
-
-
-
-## Roadmap
-
-- Opera DSWx-HLS Products
-- Set up the working environment
-- Define the area of interest
-- Data search and retrieval
-- Data analysis
-- Processing and visualization
-
-
-
-First, we will define what Opera DSWx-HLS products are and what kind of information you can obtain from them.
-Then, you will learn to set up your working environment, define the area of interest you want to gather information about, perform data search and retrieval, and analyze and visualize the data.
-
-
-
-
-## The Expected Outcome
-
--- Insert resulting image --
-
-
-
-This is the final result you will achieve after completing the workshop activities.
-
-
-
-## Before We Start
-
-- To participate in this class, you must accept the coexistence guidelines detailed [here]().
-- After speaking, mute your microphone to avoid interruptions from background noise. We might do it for you.
-- To say something, request the floor or use the chat.
-- Can we record the chat? Can we take pictures?
-
-
-
-
-Coexistence guidelines:
-- If you're in this course, you've accepted the coexistence guidelines of our community, which broadly means we'll behave in a polite and friendly manner to make this an open, safe, and friendly environment, ensuring the participation of all individuals in our virtual activities and spaces.
-- If any of you see or feel that you're not comfortable enough, you can write to us privately.
-- If the people making you uncomfortable are --teachers--, you can report it by sending an email to --add reference email--
-How to participate:
-- We're going to ask you to mute/turn off your microphones while you're not speaking so that the ambient sound from each of us doesn't bother us.
-- You can request the floor by raising your hand or in the chat, and --teachers-- will be attentive so that you can participate at the right time.
-About recording:
-- The course will be recorded; if you don't want to appear in the recording, please turn off your camera.
-- If any of you want to share what we're doing on social media, please, before taking a photo or screenshot with the faces of each of the people present, ask for permission because some people may not feel comfortable sharing their image on the internet. There are no problems in sharing images of the slides or --the teacher's face--.
-
-
-
-
-
-## Opera DSWx-HLS Dataset
-
-- Contains observations of surface water extent at specific locations and times (from February 2019 to September 2022).
-- Distributed over projected map coordinates as mosaics.
-- Each mosaic covers an area of 109.8 x 109.8 km.
-- Each mosaic includes 10 GeoTIFF (layers).
-
-
-
-
-This dataset contains observations of surface water extent at specific locations and times spanning from February 2019 to September 2022. The input dataset for generating each product is the Harmonized Landsat-8 and Sentinel-2A/B (HLS) product version 2.0. HLS products provide surface reflectance (SR) data from the Operational Land Imager (OLI) aboard the Landsat 8 satellite and the MultiSpectral Instrument (MSI) aboard the Sentinel-2A/B satellite.
-
-Surface water extent products are distributed over projected map coordinates. Each UTM mosaic covers an area of 109.8 km × 109.8 km. This area is divided into 3,660 rows and 3,660 columns with a pixel spacing of 30 m.
-
-Each product is distributed as a set of 10 GeoTIFF files (layers) including water classification, associated confidence, land cover classification, terrain shadow layer, cloud/cloud-shadow classification, Digital Elevation Model (DEM), and Diagnostic layer in PNG format.
-
-
-
-
-
-## Opera DSWx-HLS Dataset
-
-1. B02_BWTR (Water binary layer):
- - 1 (white) = presence of water.
- - 0 (black) = absence of water.
-
-2. B03_CONF (Confidence layer):
- - % confidence in its water predictions.
-
-
-
-In this workshop, we will use two layers:
-1. **B02_BWTR (Water binary layer):**
-This layer provides us with a simple image of flooded areas. Where there is water, the layer is valued at 1 (white), and where there is no water, it takes a value of 0 (black). It's like a binary map of floods, ideal for getting a quick overview of the disaster's extent.
-2. **B03_CONF (Confidence layer):**
-This layer indicates how confident the DSWx-HLS system is in its water predictions. Where the layer shows high values (near 100%), we can be very sure that there is water. In areas with lower values, confidence decreases, meaning that what appears to be water could be something else, such as shadows or clouds.
-
-To help you better visualize how this works, think of a satellite image of the flood-affected area. Areas with water appear dark blue, while dry areas appear brown or green.
-The water binary layer (B02_BWTR), overlaid on the image, would shade all blue areas white, creating a simple map of water yes/no.
-In contrast, the confidence layer (B03_CONF) would function as a transparency overlaid on the image, with solid white areas where confidence is high and increasing transparency towards black where confidence is low. This allows you to see where the DSWx-HLS system is most confident that its water predictions are correct.
-By combining these layers, scientists and humanitarian workers can get a clear picture of the extent of floods and prioritize rescue and recovery efforts.
-
-
-
-
-## Set Up the Working Environment
-
-TO BE COMPLETED.
-
-
-
-THIS NEEDS TO BE DEFINED BASED ON NOTEBOOK MODIFICATIONS.
-
-
-
-## Live Coding: Let's go to notebook XXXXX.
-
-
-
-## Selection of the Area of Interest (AOI)
-
-- Initialize user-defined parameters.
-- Perform a specific data search on NASA.
-- Search for images within the DSWx-HLS collection that match the AOI.
-
-
-
-Next, you will learn:
-
-1. How to initialize user-defined parameters:
-
-* Define the search area: Draw a rectangle on the map to indicate the area where you want to search for data.
-* Set the search period: Mark the start and end dates to narrow down the results to a specific time range.
-* Display parameters: Print on the screen the details of the search area and the chosen dates so you can verify them.
-
-2. Perform a specific data search on NASA:
-
-* Connect to the database: Link to NASA's CMR-STAC API to access its files.
-* Specify the collection: Indicate that you want to search for data from the "OPERA_L3_DSWX-HLS_PROVISIONAL_V0" collection.
-* Perform the search: Filter the results according to the search area, dates, and a maximum limit of 1000 results.
-
-3. Search for images (from the DSWx-HLS collection) that match the area of interest:
-
-* Measure overlap: Calculate how much each image overlaps with the area you are interested in.
-* Show percentages: Print these percentages on the screen so you can see the coverage.
-* Filter images: Select only those with an overlap greater than a set limit.
-
-
-
-
-
-## Live Coding: Let's go to notebook XXXXX.
-
-
-
-## Activity 1:
-
-Modify the XXX parameters to define a new area of interest.
-
-
-
-## Data Search and Retrieval
-
-- Transform filtered data into a list.
-- Display details of the first result:
- - Count the results.
- - Show overlap.
- - Indicate cloudiness.
-
-
-
-In the following section, you'll learn:
-
-1. How to transform filtered results into a list to work with them more easily.
-2. How to display details of the first result to see what information it contains.
- - Count the results: how many files were found after applying the filters.
- - Show overlap: how much each file overlaps with the area you're looking for, so you know how well they cover the area.
- - Indicate cloudiness: amount of clouds in each file before filtering, so you can consider if cloud coverage is an important factor for you.
-
-
-
-
-
-
-## Live Coding: Let's go to notebook XXXX.
-
-
-
-## Activity 2:
-
-TO BE DEFINED
-
-
-
-## Data Analysis
-
-TO BE DEFINED
-
-
-
-## Live Coding: Let's go to notebook XXXX.
-
-
-
-## Activity 3:
-
-TO BE DEFINED
-
-
-
-## Processing and Visualization
-
-TO BE DEFINED
-
-
-
-## Live Coding: Let's go to the notebook XXXX.
-
-
-
-## Activity 4:
-
-TO BE DEFINED
-
diff --git a/book/09_Scratch/SLIDES-NASA-TOPS-flood-ES.md b/book/09_Scratch/SLIDES-NASA-TOPS-flood-ES.md
deleted file mode 100644
index 7d9d881..0000000
--- a/book/09_Scratch/SLIDES-NASA-TOPS-flood-ES.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.2
----
-
-
-# Analizando de manera reproducible el riesgo de inundaciones con NASA Earthdata cloud.
-
-
-

-
ScienceCore:
Climate Risk
-
-
-
-
-
-
-
-## Objetivo
-
-Utilizar los productos abiertos de la NASA llamados Opera Dynamic Surface Water eXtent (DSWx) - Landsat Sentinel-2 armonizado (HLS) para mapear la extensión de la inundación como resultado del evento monzónico de septiembre de 2022 en Pakistán.
-
-
-
-
-En 2022, las lluvias monzónicas de Pakistán alcanzaron niveles récord, provocando devastadoras inundaciones y deslizamientos de tierra que afectaron a las cuatro provincias del país y a alrededor del 14% de su población. En este ejemplo, podrás ver cómo utilizar los productos abiertos de NASA Opera DSWx-HLS para mapear la extensión de las inundaciones causadas por el monzón ocurrido en septiembre de 2022 en Pakistán.
-
-
-
-
-## Hoja de Ruta
-
-- Productos Opera DSWx-HLS
-- Configurar el ambiente de trabajo
-- Definir el área de interés
-- Búsqueda y obtención de datos
-- Análisis de datos
-- Procesamiento y visualización
-
-
-
-Primero, definiremos qué son los productos Opera DSWx-HLS y qué tipo de información puedes obtener de ellos.
-Luego, aprenderás a configurar tu ambiente de trabajo, definir el área de interés sobre la que quieres recolectar información, realizar la búsqueda y recolección de datos, analizarlos y visualizarlos.
-
-
-
-
-## Qué te llevarás
-
--- Insertar imagen resultante --
-
-
-
-Este es el resultado final al que llegarás, luego de completar las actividades del taller.
-
-
-
-## Antes de empezar
-
-- Para participar de esta clase, debes aceptar las pautas de convivencia detalladas [aquí]().
-- Si hablas, luego silencia tu micrófono para evitar interrupciones por ruidos de fondo. Puede que lo hagamos por ti.
-- Para decir algo, pide la palabra o usa el chat.
-- ¿Podemos grabar el chat? ¿Podemos “sacar fotos”?
-
-
-
-
-Pautas de convivencia:
-- Si están en este curso aceptaron las pautas de convivencia de nuestra comunidad el cuál implica, a grandes rasgos, que nos vamos a comportar de forma educada y amable para que este sea un ambiente abierto, seguro y amigable y garantizar la participación de todas las personas en nuestras actividades y espacios virtuales.
-- Si alguno de ustedes ve o siente que no está lo suficientemente cómodo o cómoda, nos puede escribir a nosotros por mensajes privados.
-- En caso de que quienes no los hagamos sentir cómodo seamos --docentes-- lo pueden indicar enviando un mail a --agregar mail de referencia --
-Cómo participar:
-- Vamos a pedirles que se silencien/apaguen los micrófonos mientras no están hablando para que no nos moleste el sonido ambiente de cada uno de nosotros.
-- Pueden pedir la palabra levantando la mano o en el chat y --docentes-- vamos a estar atentos para que puedan participar en el momento adecuado.
-Acerca de la grabación:
-- El curso va a grabarse, si no desean aparecer en la grabación les pedimos que apaguen la camara.
-- Si alguno de ustedes quiere contar lo que estamos haciendo en redes sociales, por favor, antes de sacar una foto o captura de pantalla con las caras de cada una de las personas que están presentes, pidamos permiso porque puede haber gente que no se sienta cómoda compartiendo su imagen en internet. No hay inconvenientes en que compartan imágenes de las diapositivas o --la cara del docente--.
-
-
-
-
-
-## Dataset Opera DSWx-HLS
-
-- Contiene observaciones de la extensión superficial de agua en ubicaciones y momentos específicos (de febrero de 2019 hasta septiembre de 2022).
-- Se distribuyen sobre coordenadas de mapa proyectadas como mosaicos.
-- Cada mosaico cubre un área de 109.8 x 109.8 km.
-- Cada mosaico incluye 10 GeoTIFF (capas).
-
-
-
-
-Este conjunto de datos contiene observaciones de la extensión superficial de agua en ubicaciones y momentos específicos que abarcan desde febrero de 2019 hasta septiembre de 2022. El conjunto de datos de entrada para generar cada producto es el producto Harmonized Landsat-8 y Sentinel-2A/B (HLS) versión 2.0. Los productos HLS proporcionan datos de reflectancia de superficie (SR) del Operador de Imágenes Terrestres (OLI) a bordo del satélite Landsat 8 y del Instrumento Multiespectral (MSI) a bordo de los satélites Sentinel-2A/B.
-
-Los productos de extensión superficial de agua se distribuyen sobre coordenadas de mapa proyectadas. Cada mosaico UTM cubre un área de 109,8 km × 109,8 km. Esta área se divide en 3,660 filas y 3,660 columnas con un espaciado de píxeles de 30 m.
-
-Cada producto se distribuye como un conjunto de 10 archivos GeoTIFF (capas) que incluyen clasificación de agua, confianza asociada, clasificación de cobertura terrestre, capa de sombra del terreno, clasificación de nubes/sombras de nubes, Modelo Digital de Elevación (DEM) y capa diagnóstica en formato PNG.
-
-
-
-
-
-## Dataset Opera DSWx-HLS
-
-1. B02_BWTR (Capa binaria de agua):
- - 1 (blanco) = presencia de agua.
- - 0 (negro) = ausencia de agua.
-
-2. B03_CONF (Capa de confianza):
- - % de confianza en sus predicciones de agua.
-
-
-
-En este taller, utilizaremos dos capas:
-1. **B02_BWTR (Capa binaria de agua):**
-Esta capa nos brinda una imagen simple de las áreas inundadas. Donde hay agua, la capa vale 1 (blanco) y donde no hay agua, toma valor 0 (negro). Es como un mapa binario de las inundaciones, ideal para obtener una visión general rápida de la extensión del desastre.
-2. **B03_CONF (Capa de confianza):**
-Esta capa nos indica qué tan seguro está el sistema DSWx-HLS de sus predicciones de agua. Donde la capa muestra valores altos (cerca de 100%), podemos estar muy seguros de que hay agua. En áreas con valores más bajos, la confianza disminuye, lo que significa que lo que parece agua podría ser otra cosa, como sombras o nubes.
-
-Para ayudarte a visualizar mejor cómo funciona esto piensa en una imagen satelital de la zona afectada por las inundaciones. Las áreas con agua se ven azul oscuro, mientras que las áreas secas se ven de color marrón o verde.
-La capa binaria de agua (B02_BWTR), superpuesta sobre la imagen, sombrearía de blanco todas las áreas azules, creando un mapa simple de agua sí/no.
-En cambio, la capa de confianza (B03_CONF) funcionaría como una transparencia superpuesta sobre la imagen, con áreas blancas sólidas donde la confianza es alta y transparencia creciente hacia el negro donde la confianza es baja. Esto te permite ver dónde el sistema DSWx-HLS está más seguro de que sus predicciones de agua son correctas.
-Al combinar estas capas, los científicos y los trabajadores humanitarios pueden obtener una imagen clara de la extensión de las inundaciones y priorizar los esfuerzos de rescate y recuperación.
-
-
-
-
-## Configurar el ambiente de trabajo
-
-A COMPLETAR.
-
-
-
-ESTO HAY QUE DEFINIRLO A PARTIR DE LA MODIFICACIÓN DE LAS NOTEBOOKS.
-
-
-
-## Live Coding: Vamos a la notebook XXXXX.
-
-
-
-## Selección del área de interés (AOI)
-
-- Inicializar parámetros definidos por el usuario.
-- Realizar una búsqueda de datos específicos en la NASA.
-- Buscar imágenes dentro de colección DSWx-HLS que coincidan con el AOI.
-
-
-
-A continuación, aprenderás a:
-
-1. Inicializar parámetros definidos por el usuario:
-
-* Define la zona de búsqueda: Dibuja un rectángulo en el mapa para indicar el área donde quieres buscar datos.
-* Establece el periodo de búsqueda: Marca la fecha de inicio y de fin para acotar los resultados a un rango de tiempo específico.
-* Muestra los parámetros: Imprime en la pantalla los detalles de la zona de búsqueda y las fechas elegidas para que puedas verificarlos.
-
-2. Realizar la búsqueda de datos específicos en la NASA:
-
-* Se conecta a la base de datos: Enlaza con la API CMR-STAC de la NASA para poder acceder a sus archivos.
-* Especifica la colección: Indica que quiere buscar datos de la colección "OPERA_L3_DSWX-HLS_PROVISIONAL_V0".
-* Realiza la búsqueda: Filtra los resultados según la zona de búsqueda, las fechas y un límite máximo de 1000 resultados.
-
-3. Buscar imágenes (de la colección DSWx-HLS) que coincidan con el área de interés:
-
-* Mide la superposición: Calcula cuánto se solapa cada imagen con el área que te interesa.
-* Muestra los porcentajes: Imprime en la pantalla estos porcentajes para que puedas ver la cobertura.
-* Filtra las imágenes: Selecciona solo aquellas que tengan una superposición mayor a un límite establecido.
-
-
-
-
-
-## Live Coding: Vamos a la notebook XXXXX.
-
-
-
-## Actividad 1:
-
-Modifica los parámetros XXX para definir una nueva área de interés.
-
-
-
-## Búsqueda y obtención de datos
-
-- Transformar los datos filtrados en una lista.
-- Mostrar los detalles del primer resultado:
- - Contar los resultados.
- - Mostrar superposición.
- - Indicar nubosidad.
-
-
-
-En la siguiente sección aprenderás:
-
-1. Cómo transformar los resultados filtrados en una lista para poder trabajar con ellos más fácilmente.
-2. Cómo mostrar los detalles del primer resultado para ver cómo es la información que contiene.
-   - Contar los resultados: cuántos archivos se encontraron después de aplicar los filtros.
- - Mostrar la superposición: cuánto se solapa cada archivo con la zona que buscas, para que sepas qué tan bien cubren el área.
- - Indicar la nubosidad: cantidad de nubes que había en cada archivo antes de filtrarlo, para que puedas considerar si la cobertura de nubes es un factor importante para ti.
-
-
-
-
-
-
-## Live Coding: Vamos a la notebook XXXX.
-
-
-
-## Actividad 2:
-
-A DEFINIR
-
-
-
-## Análisis de datos
-
-A DEFINIR
-
-
-
-## Live Coding: Vamos a la notebook XXXX.
-
-
-
-## Actividad 3:
-
-A DEFINIR
-
-
-
-## Procesamiento y visualización
-
-A DEFINIR
-
-
-
-## Live Coding: Vamos a la notebook XXXX.
-
-
-
-## Actividad 4:
-
-A DEFINIR
-
diff --git a/book/09_Scratch/drought.md b/book/09_Scratch/drought.md
deleted file mode 100644
index 71a4f4c..0000000
--- a/book/09_Scratch/drought.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Drought
-
-
diff --git a/book/09_Scratch/notebooks/2_ES_Flood.md b/book/09_Scratch/notebooks/2_ES_Flood.md
deleted file mode 100644
index 0973c93..0000000
--- a/book/09_Scratch/notebooks/2_ES_Flood.md
+++ /dev/null
@@ -1,605 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.2
- kernelspec:
- display_name: opera_app_dev
- language: python
- name: python3
----
-
-
-# **Introducción a la generación de mapas de inundaciones utilizando datos de teledetección**
-**Guía para principiantes:** Este tutorial te enseñará cómo consultar y trabajar con los datos provisionales de OPERA DSWx-HLS desde la nube. Para conocer más sobre los datos utilizados en esta notebook, puedes acceder a [OPERA_L3_DSWX-HLS_PROVISIONAL_V0](https://dx.doi.org/10.5067/OPDSW-PL3V0).
-
-
-
-
-
-
-
-
-
-**Cómo empezar con los mapas de agua utilizando el dataset de OPERA DSWx-HLS:**
-Esta guía te mostrará cómo explorar los cambios en el agua en todo el mundo utilizando herramientas basadas en la nube. Usaremos OPERA DSWx-HLS, un conjunto de datos obtenido mediante teledetección que rastrea la extensión de agua desde febrero de 2019 hasta septiembre de 2022 (ACTUALIZAR RANGO DE FECHA).
-
-**1. Connecting to the data in the cloud:**
-
-You will access cloud-optimized images of the Earth (called COGs) directly from the cloud, with no heavy downloads.
-You will use a very handy spatial catalog, the CMR SpatioTemporal Asset Catalog (CMR-STAC), to find the images you need, much like looking up a book in a library.
-
-**2. Exploring the OPERA DSWx-HLS products:**
-
-You will work with provisional images of surface water extent (OPERA_L3_DSWX-HLS_PROVISIONAL_V0) collected between February 2019 and September 2022, a substantial amount of information (UPDATE DATE RANGE). These images combine the best of two satellite missions, Landsat 8 and Sentinel-2A/B, for a more complete view.
-You will also have access to 10 layers of information per image, including water classification, data confidence, land cover, terrain shadows, clouds, and more.
-
-**3. Visualizing the data your way:**
-
-You will learn to visualize these images in whatever form best suits your analysis.
-
-
-**Flooding in Pakistan with DSWx-HLS: a practical example**
-
-In 2022, Pakistan's monsoon rains reached record levels, triggering devastating floods and landslides that affected all four of the country's provinces and about 14% of its population [CDP]. In this example we show how DSWx-HLS can be used to map the extent of the flooding caused by the monsoon in September 2022.
-
-Science dataset (SDS) layers:
-DSWx-HLS provides information layers that let us visualize and analyze the situation in more detail.
-1. **B02_BWTR (Binary water layer):**
-This layer gives a simple picture of the flooded areas. Where there is water the layer takes the value 1 (white); where there is no water it takes the value 0 (black). It is essentially a binary flood map, ideal for a quick overview of the extent of the disaster.
-2. **B03_CONF (Confidence layer):**
-This layer indicates how confident DSWx-HLS is in its water predictions. Where the layer shows high values (close to 100%), we can be very confident that there is water. In areas with lower values the confidence decreases, meaning that what looks like water could be something else, such as shadows or clouds.
-
-To help you picture how this works, think of a satellite image of the flood-affected area. Areas with water appear dark blue, while dry areas appear brown or green.
-The binary water layer (B02_BWTR), overlaid on the image, would shade all the blue areas white, creating a simple yes/no water map.
-The confidence layer (B03_CONF), in turn, would act like a transparency laid over the image: solid white where confidence is high, fading towards black where confidence is low. This lets you see where DSWx-HLS is most certain that its water predictions are correct.
-By combining these layers, scientists and humanitarian workers can get a clear picture of the flood extent and prioritize rescue and recovery efforts.
-
-**Remember:**
-DSWx-HLS is a powerful tool, but the data are not perfect. It is always important to take the confidence layer into account when interpreting the results.
-This is just one example of how DSWx-HLS can be used to map flooding; there are many other potential applications for this system.
-
-**What you need:**
-- A computer with Internet access
-(Optional)
-- Basic knowledge of maps and satellite imagery
-
-**Bonus:** For more technical details, see the [OPERA product](https://d2pn8kiwq2w21t.cloudfront.net/documents/ProductSpec_DSWX_URS309746.pdf) specification document.
-
-
-
-
-
-# **Before you start!**
-
-To be prepared for the tutorial and get the most out of it, please review the section `1_Primeros pasos.md`.
-
-
-
-
-
-
-# **Part 1: Set Up the Working Environment**
-
-
-
-
-
-### **1.1 Import Packages**
-
-
-
-The Python magics `%load_ext autoreload` and `%autoreload 2` enable automatic module reloading in a Jupyter notebook. This means that if you modify a module you have imported in your notebook, the changes are picked up automatically without having to restart the kernel, as shown in the cell below.
-
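-A minimal cell showing how these two magics would typically appear near the top of the notebook (they are not part of the import cell below):
-
-```python
-# Reload imported modules automatically whenever their source files change
-%load_ext autoreload
-%autoreload 2
-```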
-
-
-The next section imports a variety of libraries and tools that let you:
-* Retrieve geospatial data from different sources.
-* Process and analyze these data.
-* Create static and interactive visualizations to explore and communicate the results.
-
-
-
-
-```python id="4-SnwjZJ0CSF" outputId="884b26b5-08a3-45d3-9570-1a27e6a1dd2c"
-import os
-from netrc import netrc
-from subprocess import Popen
-from platform import system
-from getpass import getpass
-
-from pystac_client import Client
-from pystac_client import ItemSearch
-
-import json
-
-import matplotlib.pyplot as plt
-from matplotlib import cm
-from datetime import datetime
-from tqdm import tqdm
-
-from shapely.geometry import box
-from shapely.geometry import shape
-from shapely.ops import transform
-
-import numpy as np
-import pandas as pd
-import geopandas as gpd
-from skimage import io
-
-from osgeo import gdal
-from rioxarray.merge import merge_arrays
-
-import pyproj
-from pyproj import Proj
-
-import folium
-from folium import plugins
-import geoviews as gv
-import hvplot.xarray
-import holoviews as hv
-hv.extension('bokeh')
-gv.extension('bokeh', 'matplotlib')
-
-import sys
-sys.path.append('../../')
-from src.dswx_utils import intersection_percent, colorize, getbasemaps, transform_data_for_folium
-
-import warnings
-warnings.filterwarnings('ignore')
-```
-
-
-
-### **1.2 Set the Working Directory**
-
-
-
-The following code sets up the working environment: it retrieves the path of the current directory and defines it as the working directory.
-
-
-```python id="zS09y7Ff0CSH"
-inDir = os.getcwd()
-```
-
-
-
-
-### **1.3 Generate an authentication token**
-
-
-
-
-This code helps you access NASA data securely:
-
-* Checks whether you have already saved your credentials: if it finds your stored username and password, it uses them automatically.
-* Asks for your username and password if needed: if your credentials are not found, it prompts you to enter them and stores them securely for next time.
-* Generates an authentication token: this token lets the code access NASA data without you having to enter your username and password every time.
-
-
-```python id="UXe44Ncb0CSI"
-# Generates authentication token
-# Asks for your Earthdata username and password for first time, if netrc does not exists in your home directory.
-
-urs = 'urs.earthdata.nasa.gov' # Earthdata URL endpoint for authentication
-prompts = ['Enter NASA Earthdata Login Username: ',
- 'Enter NASA Earthdata Login Password: ']
-
-# Determine the OS (Windows machines usually use an '_netrc' file)
-netrc_name = "_netrc" if system()=="Windows" else ".netrc"
-
-# Determine if netrc file exists, and if so, if it includes NASA Earthdata Login Credentials
-try:
- netrcDir = os.path.expanduser(f"~/{netrc_name}")
- netrc(netrcDir).authenticators(urs)[0]
-
-```
-
-
-The following code:
-* Prepares cloud data access: stores the information needed to connect to the PODAAC data in a file called "cookies.txt".
-* Avoids errors when looking up files: tells GDAL not to waste time scanning directory contents when opening a file.
-* Focuses on the files you need: tells GDAL to work only with files that have a TIF or TIFF extension.
-
-
-
-
-```python id="LkOCFmpm0CSJ"
-# GDAL configurations used to successfully access PODAAC Cloud Assets via vsicurl
-gdal.SetConfigOption('GDAL_HTTP_COOKIEFILE','~/cookies.txt')
-gdal.SetConfigOption('GDAL_HTTP_COOKIEJAR', '~/cookies.txt')
-gdal.SetConfigOption('GDAL_DISABLE_READDIR_ON_OPEN','EMPTY_DIR')
-gdal.SetConfigOption('CPL_VSIL_CURL_ALLOWED_EXTENSIONS','TIF, TIFF')
-```
-
-
-# **Part 2: Selecting the Area of Interest (AOI)**
-
-
-
-
-## **2. CMR-STAC API: Searching for data with spatial queries**
-
-
-
-
-
-### **2.1 Initialize user-defined parameters**
-
-
-
-* Defines the search area: draws a rectangle on the map indicating the area where you want to look for data.
-* Sets the search period: marks the start and end dates to restrict the results to a specific time range.
-* Shows the parameters: prints the details of the search area and the chosen dates so you can verify them.
-
-
-```python id="bDyra8KD0CSK" outputId="2ca06a37-54b3-4ade-cc6b-761efe056c5f"
-# USER-DEFINED PARAMETERS
-aoi = box(67.4, 26.2, 68.0, 27.5)
-start_date = datetime(2022, 1, 1) # in 2022-01-01 00:00:00 format
-stop_date = f"{datetime.today().strftime('%Y-%m-%d')} 23:59:59" # in 2022-01-01 00:00:00 format
-overlap_threshold = 10 # in percent
-#cloud_cover_threshold = 20 # in percent
-
-print(f"Search between {start_date} and {stop_date}")
-print(f"With AOI: {aoi.__geo_interface__}")
-```
-
-
-This code searches for specific NASA data:
-
-* Connects to the database: links to NASA's CMR-STAC API so it can access its archives.
-* Specifies the collection: indicates that we want data from the "OPERA_L3_DSWX-HLS_PROVISIONAL_V0" collection.
-* Runs the search: filters the results by the search area, the dates, and a maximum of 1000 results.
-
-
-```python id="ixhkdWNE0CSK"
-# Search data through CMR-STAC API
-stac = 'https://cmr.earthdata.nasa.gov/cloudstac/' # CMR-STAC API Endpoint
-api = Client.open(f'{stac}/POCLOUD/')
-collections = ['OPERA_L3_DSWX-HLS_PROVISIONAL_V0']
-
-search_params = {"collections": collections,
- "intersects": aoi.__geo_interface__,
- "datetime": [start_date, stop_date],
- "max_items": 1000}
-search_dswx = api.search(**search_params)
-```
-
-
-
-### **2.2 Search for images (from the DSWx-HLS collection) that match the area of interest**
-
-
-
-The following code:
-
-* Measures the overlap: computes how much each image overlaps the area you are interested in.
-* Shows the percentages: prints these percentages so you can see the coverage.
-* Filters the images: keeps only those whose overlap exceeds the chosen threshold.
-
-
-```python id="H9s93OUm0CSL" outputId="8c38f171-0034-42d8-a5c8-638d00827034"
-# Filter datasets based on spatial overlap
-intersects_geometry = aoi.__geo_interface__
-
-#Check percent overlap values
-print("Percent overlap before filtering: ")
-print([f"{intersection_percent(i, intersects_geometry):.2f}" for i in search_dswx.items()])
-
-# Apply spatial overlap filter (using the overlap_threshold variable defined earlier)
-dswx_filtered = (
- i for i in search_dswx.items() if (intersection_percent(i, intersects_geometry) > overlap_threshold)
-)
-```
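-
-The `intersection_percent` helper used above is imported from the local `src.dswx_utils` module. As a rough sketch (an assumption, not the actual source), such a helper might be implemented like this:
-
-```python
-# Hedged sketch of an intersection_percent-style helper (the real one lives in src.dswx_utils)
-from shapely.geometry import shape
-
-def intersection_percent_sketch(item, aoi_geojson):
-    """Rough estimate of the percent of the AOI covered by a STAC item's footprint."""
-    aoi_geom = shape(aoi_geojson)
-    item_geom = shape(item.geometry)
-    return 100 * item_geom.intersection(aoi_geom).area / aoi_geom.area
-```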
-
-
-# **Part 3: Searching for and retrieving data**
-
-
-
-
-The following code:
-
-1. Converts the filtered results into a list so they are easier to work with.
-2. Shows the details of the first result so you can see what information it contains.
-
-
-```python id="3BL5APaz0CSL" outputId="cdd13930-819a-4f1a-a087-bf182dc42c0f"
-# Inspect the items inside the filtered query
-dswx_data = list(dswx_filtered)
-# Inspect one item
-dswx_data[0].to_dict()
-```
-
-
-**Below is a summary of the search results:**
-
-* Counts the results: tells you how many files were found after applying the filters.
-* Shows the overlap: tells you how much each file overlaps the area you searched, so you know how well the results cover it.
-* Reports the cloud cover: tells you how much cloud was present in each file before filtering, so you can decide whether cloud cover matters for your analysis.
-
-
-
-
-```python id="gGPdYpKc0CSM"
-## Print search information
-# Total granules
-print(f"Total granules after search filter: {len(dswx_data)}")
-
-#Check percent overlap values
-print("Percent-overlap: ")
-print([f"{intersection_percent(i, intersects_geometry):.2f}" for i in dswx_data])
-
-# Check percent cloud cover values
-print("\nPercent cloud cover before filtering: ")
-print([f"{i.properties['eo:cloud_cover']}" for i in search_dswx.items()])
-```
-
-
-# **Hands-on activity A**: Exploring and retrieving the area of interest.
-
-
-
-# **Part 4: Analyzing the retrieved data**
-
-Time series analysis with the data for the area of interest.
-
-
-
-# **Hands-on activity B: Time series applied to the area of interest.**
-
-
-
-# **Part 5: Processing and visualizing the data**
-
-
-
-**Create a map so you can see how the data fit your area of interest:**
-
-* Draws the file boundaries: traces the outline of each file found in blue so you can see its shape and location.
-* Adds a background map: overlays a basemap of the area to give you a visual reference of the terrain.
-* Marks your area of interest: draws a yellow rectangle around the area you specified in the search so you can see how the files overlap it.
-* Shows the map: presents the resulting map so you can assess the data coverage and how well it matches your area of interest.
-
-
-```python id="8hDY5sJD0CSM"
-# Visualize the DSWx tile boundary and the user-defined bbox
-geom_df = []
-for d,_ in enumerate(dswx_data):
- geom_df.append(shape(dswx_data[d].geometry))
-
-geom_granules = gpd.GeoDataFrame({'geometry':geom_df})
-granules_poly = gv.Polygons(geom_granules, label='DSWx tile boundary').opts(line_color='blue', color=None, show_legend=True)
-
-# Use geoviews to combine a basemap with the shapely polygon of our Region of Interest (ROI)
-base = gv.tile_sources.EsriImagery.opts(width=1000, height=1000)
-
-# Get the user-specified aoi
-geom_aoi = shape(intersects_geometry)
-aoi_poly = gv.Polygons(geom_aoi, label='User-specified bbox').opts(line_color='yellow', color=None, show_legend=True)
-
-# Plot using geoviews wrapper
-granules_poly*base*aoi_poly
-```
-
-
-
-
-### **5.1 Present the search results in a table**
-
-
-
-Create a table so you can review the results in an organized way:
-
-* Loops through the results: reads each file found and extracts the most relevant information.
-* Organizes the data: places the information in columns so it is easy to read and compare:
-  - Tile ID
-  - Sensor that captured it
-  - Acquisition date
-  - Tile coordinates
-  - Bounding box of the area it covers
-  - Percentage of overlap with your area of interest
-  - Cloud cover in the file
-  - Links to download the file's bands
-* Shows the table: presents the complete table so you can analyze the results and select the files that best fit your needs.
-
-
-```python id="uYhO2Ycs0CSM"
-# Create table of search results
-dswx_data_df = []
-for item in dswx_data:
- item.to_dict()
- fn = item.id.split('_')
- ID = fn[3]
- sensor = fn[6]
- dat = item.datetime.strftime('%Y-%m-%d')
- spatial_overlap = intersection_percent(item, intersects_geometry)
- geom = item.geometry
- bbox = item.bbox
-
- # Take all the band href information
- band_links = [item.assets[links].href for links in item.assets.keys()]
- dswx_data_df.append([ID,sensor,dat,geom,bbox,spatial_overlap,cloud_cover,band_links])
-
-dswx_data_df = pd.DataFrame(dswx_data_df, columns = ['TileID', 'Sensor', 'Date', 'Coords', 'bbox', 'SpatialOverlap', 'CloudCover', 'BandLinks'])
-dswx_data_df
-```
-
-
-
-## **5.2 Loading and visualizing the flood extent**
-
-
-
-Before moving on, let's review the steps needed to visualize the flood extent (a short code sketch follows this list):
-
-1. **Obtaining the flood data:**
-
-   In this example we use satellite imagery from the NASA site.
-As for data formats, there are different types such as raster, shapefile, KML, or GeoJSON. In our case we work with raster data.
-
-2. **Loading the data:**
-
-   We use Python libraries such as GDAL or rasterio to load the flood data.
-
-3. **Visualizing the flood extent:**
-
-   Symbology: assign an appropriate color and transparency to the flooded areas to distinguish them from dry land. You can use a color scale to represent flood depth.
-
-   Overlay: superimpose the flood extent on a basemap, such as an aerial image or a topographic map, to provide geographic context.
-It is also useful to add legends, scales, labels, and other graphic elements to improve the clarity and readability of the map.
-
-   Example visualization:
-
-   [Image of a map showing the flood extent over a city. Flooded areas are shown in light blue, with greater transparency for shallower areas. The basemap is an aerial image, and a legend, scale, and labels are included.][Add image]
-
-
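-A minimal sketch of steps 2 and 3, assuming the `dswx_data_df` table built in Section 5.1 and that the second asset link of a granule points to the B02_BWTR layer:
-
-```python
-# Minimal sketch: load one cloud-hosted DSWx band and plot it (assumes dswx_data_df from Section 5.1)
-import rioxarray
-import matplotlib.pyplot as plt
-
-band_url = dswx_data_df.iloc[0].BandLinks[1]                       # assumed to be the B02_BWTR GeoTIFF
-water = rioxarray.open_rasterio(band_url, masked=True).squeeze()   # read the COG directly from the cloud
-
-fig, ax = plt.subplots(figsize=(8, 8))
-water.plot(ax=ax, cmap='Blues')                                    # water pixels (value 1) shown in blue
-ax.set_title('DSWx B02_BWTR binary water layer')
-plt.show()
-```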
-
-
-### **5.2.1 Load B02-BWTR (binary water layer) and B03-CONF (confidence layer)**
-
-
-
-To load these layers you need software that supports the data format and the corresponding loading tools. We use GDAL and rasterio, common image-processing libraries in Python that let you load these layers.
-
-
-
-```python id="dCH6j5wp0CSN"
-# Take one of the flooded datasets and check which files are included.
-dswx_data_df.iloc[43].BandLinks
-```
-
-
-
-### **5.2.2 "Fusionar mosaicos".**
-
-
-
-"Fusionar mosaicos". Esta frase se refiere a la acción de combinar dos o más mosaicos de datos para crear un mosaico más grande.
-
-En el contexto de imágenes satelitales, los mosaicos son imágenes individuales que se superponen para cubrir un área más grande. La fusión de mosaicos se utiliza a menudo para crear imágenes de mayor resolución espacial o temporal que las que se pueden obtener a partir de un solo mosaico.
-
-Para fusionar mosaicos, se necesitan dos o más mosaicos que tengan el mismo formato y resolución. Los mosaicos se pueden fusionar utilizando un software de SIG o una biblioteca de procesamiento de imágenes.
-
-
-
-The code prepares the images for display on an interactive map. Imagine you have a puzzle you want to assemble on a map; the code does the following:
-
-* Finds the right pieces: locates the September 30 images that show where there is water (the B02-BWTR layer) and how reliable that information is (the B03-CONF layer).
-
-* Prepares the pieces: reprojects the images so they fit the map (like trimming puzzle pieces so they fit together)
-and places them in the correct position on the map (like putting each piece where it belongs).
-
-* Joins the pieces: merges the separate images to create a complete view, like joining the pieces of a puzzle.
-
-* Checks one piece: inspects one of the images up close to make sure everything is in order (like checking a puzzle piece before continuing).
-
-
-```python id="V4mhzgDZ0CSN"
-# Get B02-BWTR layer for tiles acquired on 2022-09-30, project to folium's projection and merge tiles
-T42RUR_B02, T42RUR_B02_cm = transform_data_for_folium(dswx_data_df.iloc[42].BandLinks[1])
-T42RUQ_B02, T42RUQ_B02_cm = transform_data_for_folium(dswx_data_df.iloc[43].BandLinks[1])
-merged_B02 = merge_arrays([T42RUR_B02, T42RUQ_B02])
-
-# Get B03-CONF layer for tiles acquired on 2022-09-30, project to folium's projection and merge tiles
-T42RUR_B03, T42RUR_B03_cm = transform_data_for_folium(dswx_data_df.iloc[42].BandLinks[2])
-T42RUQ_B03, T42RUQ_B03_cm = transform_data_for_folium(dswx_data_df.iloc[43].BandLinks[2])
-merged_B03 = merge_arrays([T42RUR_B03, T42RUQ_B03])
-
-# Check one of the DataArrays
-merged_B02
-```
-
-
-
-
-### **5.3 Visualize the images on an interactive map**
-
-
-
- The code prepares the colors so that the images are clear and easy to interpret. Imagine a coloring book with numbered areas, where each number corresponds to a specific color. The code does the following:
-
- - Takes the black-and-white drawings: picks up the images already prepared (the ones showing water and confidence).
- - Finds the right palette: uses the list of colors to apply to each area of the images (like the color key of the coloring book).
- - Carefully colors each area: assigns the correct colors to each zone of the images, following the chosen palette.
- - Saves the colored drawings: stores the colorized images for use on the map.
-
-
-
-```python id="RLcoFzy10CSO"
-# Colorize the map using predefined colors from DSWx for Folium display
-colored_B02,cmap_B02 = colorize(merged_B02[0], cmap=T42RUR_B02_cm)
-colored_B03,cmap_B03 = colorize(merged_B03[0], cmap=T42RUR_B03_cm)
-```
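-
-The `colorize` helper also comes from the local `src.dswx_utils` module. A plausible minimal version (an assumption, not the actual implementation) could look like this:
-
-```python
-# Hedged sketch of a colorize-style helper: map a 2-D array to an RGBA image for folium.ImageOverlay
-import numpy as np
-import matplotlib
-
-def colorize_sketch(array, cmap_name='viridis'):
-    normed = (array - np.nanmin(array)) / (np.nanmax(array) - np.nanmin(array))  # scale values to 0-1
-    rgba = matplotlib.colormaps[cmap_name](normed)                               # RGBA image, shape (ny, nx, 4)
-    return rgba, cmap_name
-```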
-
-
-
-Imagine you are building an interactive map with overlaid layers; the code does the following:
-
-* Lays out the canvas: spreads a basemap on the table, like a world map,
-to give you a background to work on.
-
-* Adds extra maps: places several transparent maps on top of the basemap, like acetate sheets, so you can choose the view you like best.
-
-* Draws the flooded areas: on one of the transparent sheets, paints the areas where there is water in bright colors so they stand out clearly.
-
-* Marks the confidence in the information: on another transparent sheet, draws the confidence in the flood data in softer colors, as an extra hint for the more adventurous.
-
-* Adds useful tools: places a magnifier to zoom into areas, a button to view the map full screen, and a mini version of the map in a corner, like a compass.
-
-* Shows the coordinates: turns on an indicator that tells you where on the map you are pointing at any moment, like a coordinate guide.
-
-
-```python id="AP5r2kHN0CSQ"
-# Initialize Folium basemap
-xmid =(merged_B02.x.values.min()+merged_B02.x.values.max())/2 ; ymid = (merged_B02.y.values.min()+merged_B02.y.values.max())/2
-m = folium.Map(location=[ymid, xmid], zoom_start=9, tiles='CartoDB positron', show=True)
-
-# Add custom basemaps
-basemaps = getbasemaps()
-for basemap in basemaps:
- basemaps[basemap].add_to(m)
-
-# Overlay B02 and B03 layers
-folium.raster_layers.ImageOverlay(colored_B02,
- opacity=0.6,
- bounds=[[merged_B02.y.values.min(),merged_B02.x.values.min()],[merged_B02.y.values.max(),merged_B02.x.values.max()]],
- name='Flooded Area',
- show=True).add_to(m)
-
-folium.raster_layers.ImageOverlay(colored_B03,
- opacity=0.8,
- bounds=[[merged_B03.y.values.min(),merged_B03.x.values.min()],[merged_B03.y.values.max(),merged_B03.x.values.max()]],
- name='Confidence Layer',
- show=False).add_to(m)
-
-#layer Control
-m.add_child(folium.LayerControl())
-
-# Add fullscreen button
-plugins.Fullscreen().add_to(m)
-
-#Add inset minimap image
-minimap = plugins.MiniMap(width=300, height=300)
-m.add_child(minimap)
-
-#Mouse Position
-fmtr = "function(num) {return L.Util.formatNum(num, 3) + ' º ';};"
-plugins.MousePosition(position='bottomright', separator=' | ', prefix="Lat/Lon:",
- lat_formatter=fmtr, lng_formatter=fmtr).add_to(m)
-
-#Display
-m
-```
-
-
-# **Hands-on activity C: Visualizing the water extent in the selected area**
-
-
-```python id="hdF6iHO5Ljzz"
-
-```
diff --git a/book/09_Scratch/proposal.md b/book/09_Scratch/proposal.md
deleted file mode 100644
index d4e9c9d..0000000
--- a/book/09_Scratch/proposal.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# Reproducibly Analyzing Wildfire, Drought, and Flood Risk with NASA Earthdata Cloud
-
-:::{seealso}
-This text is excerpted from the original project proposal:
-
-Munroe, James, Palopoli, Nicolas, & Acion, Laura. (2023). Reproducibly Analyzing Wildfire, Drought, and Flood Risk with NASA Earthdata Cloud. Zenodo. https://doi.org/10.5281/zenodo.8212073
-:::
-
-
-## Project summary
-As the climate changes, prediction and management of the risk of wildfire, drought, and floods has become increasingly challenging. It is no longer sufficient to assume that what has been normal and historic for the last century will occur with the same frequency into the future. These natural risks are intrinsically linked to the changing distributions of surface water, precipitation, vegetation, and land use in both time and space. At the same time, there are now hundreds of petabytes of relevant Earth science data available through the NASA Earthdata Cloud that can be used to understand and forecast these water-dependent environmental risks. With the volume of Earth science data growing dramatically year over year, it is important for scientists to understand how open science and cloud-based data intensive computing can be used to reproducibly analyze and assess the changing risk profile of wildfire, drought, and floods.
-
-In this proposed TOPS ScienceCore module, learners will learn to identify, extract, analyze, visualize, and report on data available through NASA Earthdata Cloud for three different scenarios: wildfire, drought, and flood risk. The module will build upon TOPS OpenCore and reinforce principles of reproducibility and open science-based workflows. Computationally, the scenarios will estimate changes in the hydrological water mass balance for defined regions primarily using remote sensing data. We will demonstrate best practices in “data-proximate computing” by considering examples that involve computing climatologies and other statistics from long-time series using numerical methods that scale well with the data being available on the cloud. This module will leverage scientific Python libraries such as Xarray and Dask to perform the computations. The focus of this module will be on data processing and visualization and doing so in a reproducible and transparent way.
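-
-For instance, a data-proximate climatology of the kind described above might be sketched as follows (the file pattern and variable name are hypothetical):
-
-```python
-# Illustrative sketch only: a lazy monthly climatology with Xarray + Dask, run near the data
-import xarray as xr
-
-ds = xr.open_mfdataset('data/precip_*.nc', parallel=True, chunks={'time': 120})  # Dask-backed, lazy
-monthly_climatology = ds['precip'].groupby('time.month').mean('time')            # still lazy
-result = monthly_climatology.compute()                                           # executed close to the data
-```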
-
-After completing this module, learners will be able to adapt and remix the scenarios for their own open science objectives regarding environmental risks such as wildfire, drought, and flood. These risks are common worldwide yet each needs to be analyzed in its own regional context. The module will provide concrete examples that showcase how open science can be done.
-
-The module will be written as an extension to the OpenCore framework and all course materials will be open, available in English and Spanish, and accessible in the vision, hearing, mobility, and attention dimensions. The ScienceCore module will be released as one or more Jupyter notebooks on GitHub with supporting material for delivering the course using the cloud either for in-person or for virtual cohorts.
-
-
-## Scientific/Technical Management
-
-### Introduction
-With a changing climate, wildfires, droughts, and floods continue to be significant risks across the United States and the rest of the world {cite:p}`seneviratne_weather_2021,hicke_north_2022`. Events that used to occur only once a century are now occurring every few years. Historical norms for the frequency of extreme climate events leading to episodic disasters are no longer sufficient to infer their future frequency.
-
-Flooding is the most significant environmental disaster, affecting more than two billion people a year. Floods cause significant damage to infrastructure, displace people, and lead to disease. The frequency of flooding has increased in recent years due to changes in rainfall and land use.
-
-The data from the [National Interagency Fire Center](https://www.nifc.gov/) shows significant growth in the size of wildfires in the US over the last 25 years. In some years, over 10 million acres in the US have been burned by wildfires. Canada has also experienced significant recent wildfires resulting in loss of property, livestock, and industry.
-
-Floods and wildfires are intrinsically linked to underlying water conditions: too much or too little water. Droughts, although on a longer timescale, are also caused by too little water being retained in the environment. Droughts, wildfires, and floods can all occur over the course of a few short months in the same region. The key factor is the abundance, or absence, of precipitation (rain and snowfall) either in short-term events or long-term shifts of how that water is retained by the land.
-
-Floods, wildfires, and droughts are likely to increase in frequency and intensity due to fundamental shifts in extreme precipitation caused by climate change. Fundamentally, these three environmental risks are about water no longer having the distribution it has had in the past. We can recognize that our world is changing and creating new risks. How can we mitigate those risks?
-
-[NASA Earthdata Cloud](https://www.earthdata.nasa.gov/eosdis/cloud-evolution) has moved petabytes of data into the cloud and that data is ideally suited to answering questions about climate change risk. However, practitioners don’t yet have the proper skills and training to take advantage of these amazing resources. It is no longer sufficient to search for a dataset and then download it locally. The sheer size of the available data makes this not just impractical but, in many cases, impossible. But how do scientists attempting risk assessment complete their work when it is so difficult to download the needed data to a computer to allow running their analysis code locally? The answer is to instead perform data-proximate computing by pushing the analysis code to a computer that is very close (in terms of both costs and latency using the Internet) to where the data is hosted. For data that is hosted in the commercial cloud (such as NASA Earthdata), this means running data analysis on computers in that same cloud environment.
-
-This ScienceCore module, building on top of the OpenCore modules, will teach learners how to access the NASA Earthdata Cloud and produce dashboard-based visualizations and analyses of water sensing data. These data can then be compared with model outputs to forecast the new 'normals' for wildfire, drought, and flood risk across the world.
-This risk assessment is highly localized and needs to be repeated for every country, state, city, and village. Demonstrating how to leverage NASA Earthdata Cloud data in an open, reproducible way will help thousands of scientists and analysts produce the reports they need for their own communities.
-
-There is currently a barrier for scientists to use NASA Earthdata Cloud. They do not have the skills and expertise to analyze data at scale in the cloud. This ScienceCore module will teach that skill.
-
-### Objectives and Expected Significance
-Teaching a large number of users to reproducibly analyze Earth data will accelerate our ability to mitigate and adapt to climate change. A 'science objective', then, is to determine whether this ScienceCore module actually helps with those adaptation efforts.
-
-This new ScienceCore module will address the need to apply Earth remote sensing data to climate risk assessment. More generally, it will serve as a template for future ScienceCore modules in this domain. As part of this work, we will not only develop the ScienceCore module but also measure its effectiveness in meeting its learning objectives.
-
-Open science is important for climate change because it helps to ensure that the research and data related to climate change is accessible to everyone. This means that anyone, regardless of their expertise or background, can review and verify the research, which helps to build trust in the findings. In addition, open science promotes collaboration and sharing of ideas among researchers, which can lead to more rapid progress in understanding and addressing climate change. By making research open and transparent, we can ensure that the best science is being used to inform decisions about how to address climate change.
-
-### Impact of proposed work to state of the art
-Climate risk needs to be reevaluated at the national, state, and municipal levels. Companies and non-governmental organizations also need to assess climate risk. NASA data contained in the Earthdata Cloud is highly relevant to assessing that risk. This ScienceCore module will provide a template for accessing the data and then analyzing it reproducibly, in a consistent and open manner that exemplifies the best practices identified in the TOPS OpenCore content.
-
-The intended audience for this ScienceCore module is people tasked with producing climate risk assessments. There is potential to take the tooling and data that we collectively now have available from these global models and downscale that information on anticipated climate impacts to every country, region, state, city, town, and village. Policy makers and planners at all levels are beginning to tackle the question of taking these large, global-scale data products and figuring out what they mean for their locality. The community is thinking about how the physical climate variables (e.g. temperature and precipitation) affect risks such as flooding, wildfires, droughts, sea level rise, and food sustainability, among many others.
-
-JupyterHub infrastructure will provide a platform to deliver this module. This infrastructure can be deployed by any team with intermediate DevOps and Kubernetes knowledge. Using the open infrastructure allows the content to be run by any organization.
-
-### Relevance of proposed work to announcement
-This ScienceCore module is focused on helping people access and analyze data from NASA, including data that is stored in the cloud. This includes using open-source tools and libraries to analyze and visualize the data, and to create and share reproducible research workflows. The project also includes modules that build on existing OpenCore concepts, and that cover important topics in different scientific disciplines.
-
-These modules can be accessed through the TOPS Open edX platform or as Jupyter Books on the TOPS GitHub. The goal of ScienceCore is to make it easy for people to work with NASA data, and to collaborate and share their research with others. The project is designed to be accessible, open, collaborative, multilingual, and interactive. All of the final products created through ScienceCore will be openly licensed and shared on the TOPS GitHub, and proposals for funded projects must include plans for collaboration and participation in annual coordination meetings.
-
-### Technical approach and methodology
-To create this ScienceCore module we will bring together **pedagogical experts**, with training in open and inclusive content design, alongside **scientific content specialists** to design course material that follows best practices in open science, data analysis, and domain methodologies.
-
-This module will contain three fully worked examples of reproducible analysis of an environmental risk. During course delivery, an instructor may choose to only go through one of these worked examples and leave the others for reference. Each example will be independent.
-
-The largest risk is that the technology for accessing cloud enabled data is rapidly changing. It is possible that the specific code examples developed in this ScienceCore module will be obsolete or otherwise out of date in the near term. However, the science content and especially the open science content has a much longer persistence.
-
-Any of the ‘code exercises’ will be written so that they can be replaced in the future without having to fundamentally change the narrative of the module. The science objectives (data discovery, data reduction, visualization, comparison to model output, use in machine learning) will remain even if the libraries we use continue to undergo rapid development.
-
-Another challenge is how to scope this module so that participants have the right background for it to be useful. In a relatively short half-day course, learners will already need sufficient background in data-driven computing, programming, and visualization, as well as risk assessment, remote sensing data, and statistical analysis, to apply this module directly. This ScienceCore module will contain links and references to 'additional material' that can help learners prepare in advance as needed. So that the module is taught to those for whom it was designed, it will provide six learner personas {cite}`wilson_teaching_2020` including each person's general background, what they already know, what they want to achieve, and any special needs they have.
-
-In addition, the module will include pre-assessment tools for gauging learners' knowledge of climate science and risk assessment, their proficiency with Jupyter notebooks and Python programming, and their years of related experience, to account for pilot participants who may not yet have developed the required skills.
-
-The module will be designed using a backward design
-{cite:p}`wiggins_understanding_2005,biggs_teaching_2011,fink_creating_2013`
-incorporating guidelines like those in {cite}`metadocencia_team_hoja_2022`,
- {cite}`via_course_2020`, and {cite:p}`noauthor_collaborative_nodate`. Module backward design entails:
-1. Creating learner personas to determine and clearly communicate what audience the module is designed for.
-1. Writing an initial draft of topics that will be included and not included.
-1. Creating one summative assessment including the contents learners will have to master to obtain this module badge.
-1. Creating all the formative assessments so learners practice the content throughout the module. About one formative assessment per teaching unit, covering 5 to 9 new concepts and with no more than 15 minutes of content between assessments, ensures the module uses an active teaching style and allows both trainers and learners to assess their progress and adapt to the needs of the moment.
-1. Ordering formative assessments considering complexity, dependencies, and how a topic motivates learners.
-1. Writing the content that will guide learners between formative assessments.
-1. Generating a succinct module description for promoting it and reaching interested learners.
-
-
-```{bibliography}
-```
diff --git a/book/09_Scratch/references.bib b/book/09_Scratch/references.bib
deleted file mode 100644
index a316d33..0000000
--- a/book/09_Scratch/references.bib
+++ /dev/null
@@ -1,156 +0,0 @@
-
-@techreport{metadocencia_team_hoja_2022,
- title = {Hoja de rota modelo para desarrollo de {Cursos}},
- copyright = {Creative Commons Attribution 4.0 International, Open Access},
- url = {https://zenodo.org/record/7390559},
- doi = {10.5281/ZENODO.7390559},
- abstract = {En este documento se ofrecen pautas, sugerencias y pasos para el diseño de cursos.},
- urldate = {2022-12-08},
- author = {MetaDocencia Team},
- institution = {},
- month = aug,
- year = {2022},
- note = {Publisher: Zenodo},
- keywords = {Ciencia Abierta, clases virtuales, Comunidad, Cursos, docencia, enseñanza, Latin America, pedagogía, Teaching},
-}
-
-@techreport{via_course_2020,
- title = {Course design: {Considerations} for trainers – a {Professional} {Guide}},
- shorttitle = {Course design},
- url = {https://f1000research.com/documents/9-1377},
- doi = {10.7490/F1000RESEARCH.1118395.1},
- urldate = {2022-12-08},
- author = {Via, Allegra and Palagi, Patricia M. and Lindvall, Jessica M. and Tractenberg, Rochelle E. and Attwood, Teresa K. and Foundation, The GOBLET},
- year = {2020},
- note = {Publisher: F1000 Research Limited},
- institution = {},
-
-}
-
-@book{wiggins_understanding_2005,
- address = {Alexandria, VA},
- edition = {Expanded 2nd ed},
- title = {Understanding by design},
- isbn = {978-1-4166-0035-0},
- publisher = {Association for Supervision and Curriculum Development},
- author = {Wiggins, Grant P. and McTighe, Jay},
- year = {2005},
- keywords = {Comprehension, Curriculum planning, Curriculum-based assessment, Learning, United States},
-}
-
-@book{biggs_teaching_2011,
- address = {Maidenhead, England New York, NY},
- edition = {4th edition},
- series = {{SRHE} and {Open} {University} {Press} imprint},
- title = {Teaching for quality learning at university: what the student does},
- isbn = {978-0-335-24275-7},
- shorttitle = {Teaching for quality learning at university},
- language = {eng},
- publisher = {McGraw-Hill, Society for Research into Higher Education \& Open University Press},
- author = {Biggs, John B. and Tang, Catherine So-kum},
- collaborator = {Society for Research into Higher Education},
- year = {2011},
-}
-
-@book{fink_creating_2013,
- address = {San Francisco},
- edition = {Revised and updated edition},
- series = {Jossey-{Bass} {Higher} and {Adult} {Education} {Series}},
- title = {Creating significant learning experiences: an integrated approach to designing college courses},
- isbn = {978-1-118-12425-3 978-1-118-41901-4 978-1-118-41632-7},
- shorttitle = {Creating significant learning experiences},
- abstract = {"In this thoroughly updated edition of L. Dee Fink's bestselling classic, he discusses new research on how people learn, active learning, and the effectiveness of his popular model adds more examples from online teaching; and further focuses on the impact of student engagement on student learning. The book explores the changes in higher education nationally and internationally since the publication of the previous edition, includes additional procedures for integrating one's course, and adds strategies for dealing with student resistance to innovative teaching. This edition continues to provide conceptual and procedural tools that are invaluable for all teachers when designing instruction. It shows how to use a taxonomy of significant learning and systematically combine the best research-based practices for learning-centered teaching with a teaching strategy in a way that results in powerful learning experiences for students. Acquiring a deeper understanding of the design process will empower teachers to creatively design courses that will result in significant learning for students"--},
- publisher = {Jossey-Bass},
- author = {Fink, L. Dee},
- year = {2013},
- keywords = {United States, College teaching, Curricula, EDUCATION / Teaching Methods \& Materials / General, Education, Higher},
-}
-
-@misc{noauthor_collaborative_nodate,
- title = {Collaborative {Lesson} {Development} {Training}: {Summary} and {Setup}},
- url = {https://carpentries.github.io/lesson-development-training/},
- urldate = {2022-12-08},
-}
-
-@misc{earth_science_data_systems_earthdata_2019,
- type = {Program},
- title = {Earthdata {Cloud} {Evolution}},
- url = {http://www.earthdata.nasa.gov/eosdis/cloud-evolution},
- abstract = {Feature article about efforts to move EOSDIS data and services into the commercial cloud, the reasons behind this effort, and the benefits to data users.},
- language = {en},
- urldate = {2022-12-08},
- journal = {Earthdata},
- author = {Earth Science Data Systems, NASA},
- month = may,
- year = {2019},
- note = {Publisher: Earth Science Data Systems, NASA},
-}
-
-@incollection{hicke_north_2022,
- address = {Cambridge, UK and New York, USA},
- title = {North {America}},
- isbn = {978-1-00-932584-4},
- booktitle = {Climate {Change} 2022: {Impacts}, {Adaptation} and {Vulnerability}. {Contribution} of {Working} {Group} {II} to the {Sixth} {Assessment} {Report} of the {Intergovernmental} {Panel} on {Climate} {Change}},
- publisher = {Cambridge University Press},
- author = {Hicke, J.A. and Lucatello, S. and L.D., Mortsch and Dawson, J. and Aguilar, M. Domínguez and Enquist, C.A.F. and Gilmore, E.A. and Gutzler, D.S. and Harper, S. and Holsman, K. and Jewett, E.B. and Kohler, T.A. and Miller, KA.},
- editor = {Pörtner, H. O. and Roberts, D. C. and Tignor, M. and Poloczanska, E. S. and Mintenbeck, K. and Alegría, A. and Craig, M. and Langsdorf, S. and Löschke, S. and Möller, V. and Okem, A. and Rama, B.},
- year = {2022},
- doi = {10.1017/9781009325844.016.1929},
- note = {Type: Book Section},
- pages = {1929--2042},
-}
-
-@incollection{doblas-reyes_linking_2021,
- address = {Cambridge, United Kingdom and New York, NY, USA},
- title = {Linking {Global} to {Regional} {Climate} {Change}},
- booktitle = {Climate {Change} 2021: {The} {Physical} {Science} {Basis}. {Contribution} of {Working} {Group} {I} to the {Sixth} {Assessment} {Report} of the {Intergovernmental} {Panel} on {Climate} {Change}},
- publisher = {Cambridge University Press},
- author = {Doblas-Reyes, F.J. and Sörensson, A.A. and Almazroui, M. and Dosio, A. and Gutowski, W.J. and Haarsma, R. and Hamdi, R. and Hewitson, B. and Kwon, W.-T. and Lamptey, B.L. and Maraun, D. and Stephenson, T.S. and Takayabu, I. and Terray, L. and Turner, A. and Zuo, Z.},
- editor = {Masson-Delmotte, V. and Zhai, P. and Pirani, A. and Connors, S.L. and Péan, C. and Berger, S. and Caud, N. and Chen, Y. and Goldfarb, L. and Gomis, M.I. and Huang, M. and Leitzell, K. and Lonnoy, E. and Matthews, J.B.R. and Maycock, T.K. and Waterfield, T. and Yelekçi, O. and Yu, R. and Zhou, B.},
- year = {2021},
- doi = {10.1017/9781009157896.012},
- note = {Type: Book Section},
- pages = {1363--1512},
-}
-
-@incollection{douville_water_2021,
- address = {Cambridge, United Kingdom and New York, NY, USA},
- title = {Water {Cycle} {Changes}},
- booktitle = {Climate {Change} 2021: {The} {Physical} {Science} {Basis}. {Contribution} of {Working} {Group} {I} to the {Sixth} {Assessment} {Report} of the {Intergovernmental} {Panel} on {Climate} {Change}},
- publisher = {Cambridge University Press},
- author = {Douville, H. and Raghavan, K. and Renwick, J. and Allan, R.P. and Arias, P.A. and Barlow, M. and Cerezo-Mota, R. and Cherchi, A. and Gan, T.Y. and Gergis, J. and Jiang, D. and Khan, A. and Pokam Mba, W. and Rosenfeld, D. and Tierney, J. and Zolina, O.},
- editor = {Masson-Delmotte, V. and Zhai, P. and Pirani, A. and Connors, S.L. and Péan, C. and Berger, S. and Caud, N. and Chen, Y. and Goldfarb, L. and Gomis, M.I. and Huang, M. and Leitzell, K. and Lonnoy, E. and Matthews, J.B.R. and Maycock, T.K. and Waterfield, T. and Yelekçi, O. and Yu, R. and Zhou, B.},
- year = {2021},
- doi = {10.1017/9781009157896.010},
- note = {Type: Book Section},
- pages = {1055--1210},
-}
-
-@incollection{seneviratne_weather_2021,
- address = {Cambridge, United Kingdom and New York, NY, USA},
- title = {Weather and {Climate} {Extreme} {Events} in a {Changing} {Climate}},
- booktitle = {Climate {Change} 2021: {The} {Physical} {Science} {Basis}. {Contribution} of {Working} {Group} {I} to the {Sixth} {Assessment} {Report} of the {Intergovernmental} {Panel} on {Climate} {Change}},
- publisher = {Cambridge University Press},
- author = {Seneviratne, S.I. and Zhang, X. and Adnan, M. and Badi, W. and Dereczynski, C. and Di Luca, A. and Ghosh, S. and Iskandar, I. and Kossin, J. and Lewis, S. and Otto, F. and Pinto, I. and Satoh, M. and Vicente-Serrano, S.M. and Wehner, M. and Zhou, B.},
- editor = {Masson-Delmotte, V. and Zhai, P. and Pirani, A. and Connors, S.L. and Péan, C. and Berger, S. and Caud, N. and Chen, Y. and Goldfarb, L. and Gomis, M.I. and Huang, M. and Leitzell, K. and Lonnoy, E. and Matthews, J.B.R. and Maycock, T.K. and Waterfield, T. and Yelekçi, O. and Yu, R. and Zhou, B.},
- year = {2021},
- doi = {10.1017/9781009157896.013},
- note = {Type: Book Section},
- pages = {1513--1766},
-}
-
-@book{wilson_teaching_2020,
- address = {Boca Raton},
- title = {Teaching tech together: how to make lessons that work and build a teaching community around them},
- isbn = {978-0-429-33070-4 978-1-00-072801-9},
- shorttitle = {Teaching tech together},
- abstract = {"Hundreds of grassroots groups have sprung up around the world to teach programming, web design, robotics, and other skills outside traditional classrooms. These groups exist so that people don't have to learn these things on their own, but ironically, their founders and instructors are often teaching themselves how to teach. There's a better way. This book presents evidence-based practices that will help you create and deliver lessons that work and build a teaching community around them. Topics include the differences between different kinds of learners, diagnosing and correcting misunderstandings, teaching as a performance art, what motivates and demotivates adult learners, how to be a good ally, fostering a healthy community, getting the word out, and building alliances with like-minded groups. The book includes over a hundred exercises that can be done individually or in groups, over 350 references, and a glossary to help you navigate educational jargon"--},
- publisher = {CRC Press},
- author = {Wilson, Greg},
- year = {2020},
- keywords = {Computer programming, Design Study and teaching, Robotics, Study and teaching, Web sites},
-}
diff --git a/book/09_Scratch/slides/Open_Science_Intro_Slides.md b/book/09_Scratch/slides/Open_Science_Intro_Slides.md
deleted file mode 100644
index dca7cc0..0000000
--- a/book/09_Scratch/slides/Open_Science_Intro_Slides.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-jupyter:
- jupytext:
- text_representation:
- extension: .md
- format_name: markdown
- format_version: '1.3'
- jupytext_version: 1.16.2
- kernelspec:
- display_name: base
- language: python
- name: python3
----
-
-## About this tutorial
-* Use data available through the NASA Earthdata Cloud to understand and forecast environmental risks such as wildfire, drought, and floods.
-* Analyze, visualize, and report on data through open science-based workflows and the use of cloud-based data computing.
-
-
-## What is Open Science?
-***The principle and practice of making research products and processes available to all, while respecting diverse cultures, maintaining security and privacy, and fostering collaborations, reproducibility, and equity***
-
-
-
-
-
-### What open science resources are available?
-
-- Over 100 Petabytes of openly available NASA Earthdata.
-- Online platforms for data sharing and collaboration.
-- Publicly accessible code repositories and development tools.
-
-
-### What are the benefits of Open Science?
-
-- Enhances the discoverability and accessibility of scientific processes and outputs.
-- Open methods enhance reproducibility.
-- Transparency and verifiability enhance accuracy.
-- Scrutiny of analytic decisions promotes trust.
-- Accessible data and collective efforts accelerate discoveries.
-- Open science fosters inclusion, diversity, equity, and accessibility (IDEA).
-- And much more.
-
-
-
-
-
-
-
-
-## Where to start? Open Research Products
-
-Scientific knowledge, or research products, take the form of:
-
-#### Data
-
-Scientifically relevant information in digital format.
-- Examples: mission data, calibration details, metadata.
-
-#### Code
-
-Software used in scientific computing.
-- Types: general purpose, libraries, modeling, analysis, single-use.
-
-#### Results
-
-Various research outputs showcasing scientific findings.
-- Examples: publications, notebooks, presentations, blog posts.
-
-
-
-