
DAS-2232 -small functions added to support the main solution in the t… #16

Merged
9 commits merged on Nov 1, 2024
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -16,5 +16,5 @@
#
harmony-py~=0.4.10
netCDF4~=1.6.4
-notebook~=7.0.4
+notebook~=7.2.2
owenlittlejohns marked this conversation as resolved.
xarray~=2023.9.0
232 changes: 232 additions & 0 deletions hoss/coordinate_utilities.py
@@ -0,0 +1,232 @@
""" This module contains utility functions used for
coordinate variables and methods to convert the
coordinate variable data to x/y dimension scales
"""

from typing import Set, Tuple

import numpy as np
from netCDF4 import Dataset
from numpy import ndarray
from varinfo import VariableFromDmr, VarInfoFromDmr

from hoss.exceptions import (
    CannotComputeDimensionResolution,
    IrregularCoordinateVariables,
    MissingCoordinateVariable,
)


def get_override_projected_dimension_names(
    varinfo: VarInfoFromDmr, variable_name: str
) -> list[str]:
    """Returns the x-y projected dimension names that match the group of
    the geographic coordinate variables. The coordinate variable is
    converted to the 'projected_y' and 'projected_x' dimension scales.
    """
    override_variable = varinfo.get_variable(variable_name)

    if override_variable is not None and (
        override_variable.is_latitude() or override_variable.is_longitude()
Contributor:

This might be ok in context, but as a stand-alone function it is a bit odd - taking in a variable_name, and then only working if that variable is either a latitude or longitude variable. In fact, the result is not really dependent upon the variable_name passed in, other than to use the path. It might be that it has the side effect of verifying the variable_name as a latitude or longitude, but that is because it calls upon the is_latitude and is_longitude VarInfo methods, which can be called directly. It raises the exception MissingCoordinateVariable if the variable passed in is not a latitude or longitude, but is a known variable, i.e., not intrinsically a missing coordinate use case.

It has the feel of a method perhaps taken as frequently used code, in which case the name could be check_coord_var_and_get_std_dimensions. Alternatively, if there is not the need to verify the variable is a coordinate, the code could be simplified to simply return standard dimensions within a shared path - a get_std_projected_dimension_reference. If the only use is in the following method, that would be my recommendation.

As is, the name could also be: get_std_projected_coordinate_dimensions, but I wonder about the need to check for is_latitude or is_longitude.

Collaborator (author):

Will use your newly suggested names.

I originally had is_latitude map to projected_y and is_longitude map to projected_x, but I removed that. We do need to validate that they are coordinates - we are using that path for the projected dimensions. Maybe it does not have to be within this method.

Collaborator (author):

Made the updates - removed "override", renamed the methods, and removed the lat/lon check in the get_projected_dimension_names function.
commit - ca881d1

    ):
        projected_dimension_names = [
            f'{override_variable.group_path}/projected_y',
            f'{override_variable.group_path}/projected_x',
Member:

This choice of dimension names is better than before, but it's still likely to cause problems down the road. This function currently relies on the assumption that coordinate variables for different grids will be in different groups, which is not guaranteed.

(I think we see this with some of the gridded ICESat-2 ATLAS collections, for example)

Collaborator (author):

I guess I could just have group_path/coordinate_name_projected_x or projected_y. The coordinate names would be unique - it would be latitude_projected_x or latitude_projected_y, so it would not matter; it would be unique.

Member:

One approach (that would make this function more complicated 😞) could be to:

  • (Already done) - Get the VariableFromDmr instance.
  • Get the references from the coordinates metadata attribute.
  • Find the latitude and longitude variables from those references and put them in a list. - Maybe reusing your get_coordinate_variables function.
  • Make a unique hash of those references.
  • Prepend that hash on to .../projected_x and .../projected_y (like you're already doing with the group path).

I took some inspiration from MaskFill, to try something like:

from hashlib import sha256

# Faking the coordinates list for now:
coordinates_list = ['/latitude', '/longitude']

coordinates_hash = sha256(' '.join(coordinates_list).encode('utf-8')).hexdigest()

projected_dimension_names = [
    f'{coordinates_hash}/projected_y',
    f'{coordinates_hash}/projected_x',
]

The string will look a bit ugly (e.g., 'a089b9ebff6935f6c8332710de2ee3b351bd47c1fb807b22765969617027e8d2'), but it will be unique and reproducible.
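The hashing idea above can be made concrete. The sketch below is a standalone illustration (the helper name and the coordinate paths are stand-ins, not existing HOSS code) showing that the prefix is deterministic for a given coordinate pair and distinct for different pairs:

```python
from hashlib import sha256


def projected_dimension_names_for(coordinates_list: list[str]) -> list[str]:
    # Hypothetical helper: derive unique, reproducible dimension names
    # from the referenced coordinate variable paths.
    coordinates_hash = sha256(' '.join(coordinates_list).encode('utf-8')).hexdigest()
    return [
        f'{coordinates_hash}/projected_y',
        f'{coordinates_hash}/projected_x',
    ]


# The same coordinate pair always yields the same names:
names_one = projected_dimension_names_for(['/latitude', '/longitude'])
names_two = projected_dimension_names_for(['/latitude', '/longitude'])

# A different coordinate pair yields different names:
other = projected_dimension_names_for(['/group/latitude', '/group/longitude'])
```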

        ]
    else:
        raise MissingCoordinateVariable(variable_name)

    return projected_dimension_names


def get_override_projected_dimensions(
Contributor:

I'm not sure if this is expected to be updated when we do use a configuration file, but in the mean time, perhaps get_std_projected_dimensions is a better name. Also, such a revised name suggests a method that is well defined and ready to merge, vs. getting the projected dimensions, possibly with transposed names, which is TBD.

Collaborator (author):

ok

Collaborator (author):

Used override because we are overriding the coordinates with projected dimensions.

Member:

when we do use a configuration file

I'm going to sound like a broken record, but I do not think we should assume leaning on a configuration file is the best implementation. There are two relevant tenets I try to generally adhere to in our service code:

  1. Avoid hard-coding collection specific information into the code itself.
  2. Minimise the need for configuration file entries, to maintain a low barrier to entry for data providers to on-board collections to a service.

We have the information we need to determine dimension ordering while not needing configuration file-based information. And, if we write the code in a general way, this could open HOSS up to a large number of collections that are HDF-5 format but without proper 1-D dimension variables. That could be a massive win! But it would be stymied by data providers needing to add information to the configuration file. To date, no-one outside of DAS has added any configuration file information to any of our services in the last 3 years. That's pretty telling.

Member:

And, yup, I acknowledge that this monologue is not strictly relevant to this PR.

Member:

It feels like the better output for this function (definitely out-of-scope for this PR) would be a list with the number of elements matching the number of dimensions of the variable. Maybe the additional, non-spatial, dimensions could have None for their dimensions. Just to write out what that might look like:

Variable dimensions    Output of this function
(time, lat, lon)       [None, '/unique_cache_key_one', '/unique_cache_key_two']
(time, lon, lat)       [None, '/unique_cache_key_two', '/unique_cache_key_one']
(lat, lon, time)       ['/unique_cache_key_one', '/unique_cache_key_two', None]
(lat, lon)             ['/unique_cache_key_one', '/unique_cache_key_two']
(lat)                  ['/unique_cache_key_one']
(lon)                  ['/unique_cache_key_two']
(time)                 [None]
(other_dimension)      [None]

If this same function was called in add_index_range, that would mean that you'd have a correctly ordered list, and would end up doing something like index_ranges.get(None), and so end up with [] in the right places for dimensions you don't have ranges for. Something to maybe discuss during the next iteration?
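The mapping in the table above can be sketched as follows. This is a hypothetical illustration of the proposal, not existing HOSS code: the dimension names and cache keys are the stand-ins from the table, and the lookup table is a placeholder for whatever would derive the keys.

```python
# Hypothetical mapping from spatial dimension names to unique cache keys;
# non-spatial dimensions map to None.
SPATIAL_KEYS = {
    'lat': '/unique_cache_key_one',
    'lon': '/unique_cache_key_two',
}


def projected_dimensions_for(variable_dimensions: tuple) -> list:
    # dict.get returns None for dimensions not in the mapping.
    return [SPATIAL_KEYS.get(dimension) for dimension in variable_dimensions]


ordered = projected_dimensions_for(('time', 'lat', 'lon'))

# Downstream, index_ranges.get(None, []) naturally yields an empty range
# for the non-spatial dimensions, preserving dimension order:
index_ranges = {'/unique_cache_key_one': (0, 9), '/unique_cache_key_two': (5, 14)}
ranges = [index_ranges.get(dimension, []) for dimension in ordered]
```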

    varinfo: VarInfoFromDmr,
    variable_name: str,
) -> list[str]:
    """Returns the projected dimension names from coordinate variables."""
    latitude_coordinates, longitude_coordinates = get_coordinate_variables(
        varinfo, [variable_name]
    )

    override_dimensions = []
    if latitude_coordinates and longitude_coordinates:
        # There should be only one latitude and one longitude coordinate
        # for a given variable.
        override_dimensions = get_override_projected_dimension_names(
            varinfo, latitude_coordinates[0]
        )
    elif (
        # The requested variable is itself a coordinate variable.
        varinfo.get_variable(variable_name).is_latitude()
        or varinfo.get_variable(variable_name).is_longitude()
    ):
        override_dimensions = get_override_projected_dimension_names(
            varinfo, variable_name
        )
    return override_dimensions


def get_variables_with_anonymous_dims(
    varinfo: VarInfoFromDmr, variables: set[str]
) -> set[str]:
    """Returns the set of requested variables that do not have any
    associated dimensions.
    """
    return set(
        variable
        for variable in variables
        if len(varinfo.get_variable(variable).dimensions) == 0
Contributor:

Here is where we need the additional:

if (len(varinfo.get_variable(variable).dimensions) == 0
    or all([ for dimension in varinfo.get_variable(variable.dimensions) : 
                  varinfo.get_variable(dimension) not None and not [] )

(excuse the pidgin Python)

Collaborator (author):

Maybe I will include it as a comment till we make the configuration change?

Member:

I strongly prefer what we have now. Also, the function contents exactly matches the function name.

Also, the snippet supplied above is not correct Python code, so it's hard to know for sure what you are trying to achieve. Trying to decompose that snippet:

if (len(varinfo.get_variable(variable).dimensions) == 0
    or all([ for dimension in varinfo.get_variable(variable.dimensions) : varinfo.get_variable(dimension) not None and not [] )
  • The first bit still makes sense - if the variable in question doesn't have dimensions.
  • Then, I think you are trying to see the VarInfoFromDmr instance does not have any of the listed dimensions as variables.
  • The and not [] is a no-op. It will always evaluate to True, because it is being evaluated in isolation, and you are asking if an empty list is "falsy", which it is.

While I don't like this approach, I think what you are trying to suggest would be more like:

if (
    len(varinfo.get_variable(variable).dimensions) == 0
    or all(
        varinfo.get_variable(dimension) == None
        for dimension in varinfo.get_variable(variable).dimensions
    )
)

If this was to be augmented in such a way, I would recommend breaking this check out into its own function, because the set comprehension will become very hard to read.

@D-Auty (Oct 28, 2024):

I can see splitting out the function to clarify the code and document the comprehension.
I'm less clear on forcing the upstream code to use this code without the additional check, and then add in the call to the new function in every usage. That additional check is now essential to ensure the case of OPeNDAP creating "empty" dimensions does not allow this check, by itself, to succeed. And of course, OPeNDAP's "empty" dimensions is pretty much always going to be the case.

Comment:

Suggested change:

-    if len(varinfo.get_variable(variable).dimensions) == 0
+    if (len(varinfo.get_variable(variable).dimensions) == 0
+        or any_absent_dimension_variables(variable)
...

def any_absent_dimension_variables(variable: str) -> bool:
    return any(
        varinfo.get_variable(dimension) is None
        for dimension in varinfo.get_variable(variable).dimensions
    )

Collaborator (author):

function updated and unit tests added - d972777

    )


def get_coordinate_variables(
    varinfo: VarInfoFromDmr,
    requested_variables: Set[str],
) -> tuple[list, list]:
    """This method returns the coordinate variables that are referenced by
    the requested variables, in a specific order: [latitude, longitude].
    """
    coordinate_variables_set = sorted(
        varinfo.get_references_for_attribute(requested_variables, 'coordinates')
    )

    latitude_coordinate_variables = [
        coordinate
        for coordinate in coordinate_variables_set
        if varinfo.get_variable(coordinate).is_latitude()
    ]

    longitude_coordinate_variables = [
        coordinate
        for coordinate in coordinate_variables_set
        if varinfo.get_variable(coordinate).is_longitude()
    ]

    return latitude_coordinate_variables, longitude_coordinate_variables


def get_row_col_sizes_from_coordinate_datasets(
    lat_arr: ndarray,
    lon_arr: ndarray,
) -> Tuple[int, int]:
    """This method returns the row and column sizes of the coordinate
    datasets.
    """
    if lat_arr.ndim > 1 and lon_arr.shape == lat_arr.shape:
        col_size = lat_arr.shape[1]
        row_size = lat_arr.shape[0]
    elif (
        lat_arr.ndim == 1
        and lon_arr.ndim == 1
        and lat_arr.size > 0
        and lon_arr.size > 0
    ):
        col_size = lon_arr.size
        row_size = lat_arr.size
    else:
        raise IrregularCoordinateVariables(lon_arr.shape, lat_arr.shape)
    return row_size, col_size
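As an aside, the branching above can be exercised with a minimal standalone sketch (this mirrors the logic of the diffed function; the helper below and its ValueError are illustrative stand-ins, not the HOSS code or its custom exception):

```python
import numpy as np


def row_col_sizes(lat_arr, lon_arr):
    # 2-D case: matching lat/lon grids yield (rows, cols).
    if lat_arr.ndim > 1 and lon_arr.shape == lat_arr.shape:
        return lat_arr.shape[0], lat_arr.shape[1]
    # 1-D case: row count from latitude, column count from longitude.
    if lat_arr.ndim == 1 and lon_arr.ndim == 1 and lat_arr.size and lon_arr.size:
        return lat_arr.size, lon_arr.size
    raise ValueError(f'Irregular coordinates: {lon_arr.shape} vs {lat_arr.shape}')


grid = row_col_sizes(np.zeros((3, 4)), np.zeros((3, 4)))  # 2-D case
vector = row_col_sizes(np.arange(5.0), np.arange(7.0))  # 1-D case
```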


def get_lat_lon_arrays(
    prefetch_dataset: Dataset,
    latitude_coordinate: VariableFromDmr,
    longitude_coordinate: VariableFromDmr,
) -> Tuple[ndarray, ndarray]:
    """This method is used to return the lat/lon arrays from a 2D
    coordinate dataset.
    """
    try:
        lat_arr = prefetch_dataset[latitude_coordinate.full_name_path][:]
    except Exception as exception:
        raise MissingCoordinateVariable(
            latitude_coordinate.full_name_path
        ) from exception

    try:
        lon_arr = prefetch_dataset[longitude_coordinate.full_name_path][:]
    except Exception as exception:
        raise MissingCoordinateVariable(
            longitude_coordinate.full_name_path
        ) from exception

    return lat_arr, lon_arr


def get_dimension_scale_from_dimvalues(
    dim_values: ndarray, dim_indices: ndarray, dim_size: float
) -> ndarray:
    """Return a full dimension scale based on the two projected points and
    grid size.
    """
    dim_resolution = 0.0
    if (dim_indices[1] != dim_indices[0]) and (dim_values[1] != dim_values[0]):
        dim_resolution = (dim_values[1] - dim_values[0]) / (
            dim_indices[1] - dim_indices[0]
        )
    if dim_resolution == 0.0:
        raise CannotComputeDimensionResolution(dim_values[0], dim_indices[0])

    # Create the dimension scale.
    dim_asc = dim_values[1] > dim_values[0]

    if dim_asc:
        dim_min = dim_values[0] + (dim_resolution * dim_indices[0])
        dim_max = dim_values[0] + (dim_resolution * (dim_size - dim_indices[0] - 1))
        dim_data = np.linspace(dim_min, dim_max, dim_size)
    else:
        dim_max = dim_values[0] + (-dim_resolution * dim_indices[0])
        dim_min = dim_values[0] - (-dim_resolution * (dim_size - dim_indices[0] - 1))
        dim_data = np.linspace(dim_max, dim_min, dim_size)

    return dim_data
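A worked example of the ascending branch above, under assumed inputs (two projected points at indices 0 and 1, values 10 and 20, on a five-cell grid); the arithmetic is inlined here rather than calling the HOSS function:

```python
import numpy as np

# Two known projected points and the target grid size (illustrative values).
dim_values = np.array([10.0, 20.0])
dim_indices = np.array([0, 1])
dim_size = 5

# Resolution from the two points: (20 - 10) / (1 - 0) = 10.
dim_resolution = (dim_values[1] - dim_values[0]) / (dim_indices[1] - dim_indices[0])

# Ascending branch: extrapolate from the first point to both grid edges.
dim_min = dim_values[0] + dim_resolution * dim_indices[0]
dim_max = dim_values[0] + dim_resolution * (dim_size - dim_indices[0] - 1)
dim_data = np.linspace(dim_min, dim_max, dim_size)
# dim_data → [10., 20., 30., 40., 50.]
```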


def get_valid_indices(
    coordinate_row_col: ndarray, coordinate_fill: float, coordinate_name: str
) -> ndarray:
    """Returns the indices of the array elements that are not fill
    values.
    """
    if coordinate_fill:
Member (@owenlittlejohns, Oct 22, 2024):

I think I asked this on the other PR - should this check be mutually exclusive to the other checks in the longitude or latitude blocks, or should it be done in addition to those checks? Right now, if you have a fill value, you are only checking for where the coordinate is not fill, and not considering your other checks. (I tend to prefer this first check, but wanted to confirm what the logic was intended to be)

Collaborator author (@sudha-murthy, Oct 23, 2024):

Right - if we have a fill value, we use that to check the validity of the data, and if that is not available we check the geographic extent range. I guess we can check the range even if the fill value is provided, in case the coordinate data itself is bad.

If we check the lat/lon valid range first, the check for fill does become redundant; the fill value would definitely be outside that range.

I guess @autydp can weigh in.

Comment:

I would check the coordinates regardless, and let the fill-value be outside that range. It simplifies the code and those checks need to happen.

Collaborator (author):

updated - 51b110c

Member:

Thanks for reworking this so that the fill value check and the latitude/longitude range checks can both happen.

I think the coordinate.is_latitude() and coordinate.is_longitude() checks could benefit from some numpy magic, rather than looping individually through each element. I think what you could use is the element-wise-and, which can be either written as & or np.logical_and. You could do something like:

if coordinate_fill is not None:
    is_not_fill = ~np.isclose(coordinate_row_col, float(coordinate_fill))
else:
    # Creates an entire array of `True` values.
    is_not_fill = np.ones_like(coordinate_row_col, dtype=bool)

if coordinate.is_longitude():
    valid_indices = np.where(
        np.logical_and(
            is_not_fill,
            np.logical_and(
                coordinate_row_col >= -180.0,
                coordinate_row_col <= 360.0
            )
        )
    )
elif coordinate.is_latitude():
    valid_indices = np.where(
        np.logical_and(
            is_not_fill,
            np.logical_and(
                coordinate_row_col >= -90.0,
                coordinate_row_col <= 90.0
            )
        )
    )
else:
    valid_indices = np.empty((0, 0))

Note, in the snippet above, I've also changed the first check from if coordinate_fill to if coordinate_fill is not None. That's pretty important, as zero could be a valid fill value, but if coordinate_fill = 0, then this check will evaluate to False.

Ultimately, I think the conditions you have now are equivalent to this, just maybe not as efficient. So the only thing I'd definitely like to see changed is that first if coordinate_fill condition, so that it's not treating a fill value of 0 as non-fill.

        valid_indices = np.where(
            ~np.isclose(coordinate_row_col, float(coordinate_fill))
        )[0]
    elif coordinate_name == 'longitude':
        valid_indices = np.where(
            (coordinate_row_col >= -180.0) & (coordinate_row_col <= 180.0)
Member:

Some collections aren't normalised to just -180 ≤ longitude (degrees east) ≤ 180, some are 0 ≤ longitude (degrees east) ≤ 360. Maybe this check should be:

(coordinate_row_col >= -180.0) & (coordinate_row_col <= 360.0)

Also, please use and instead of & which is a bitwise-and operator, and not the logical AND operator. (Same in the condition for latitude checks below)

Collaborator (author):

oh sorry I knew that. Not sure why I used &. will fix.

Collaborator (author):

I think changing to and gave a different error:
Simplify chained comparison between the operands (chained-comparison)

I have to change that check to be simpler.

Member:

I just wanted to come back to this thread - I was wrong about the &. But I think I prefer the equivalent np.logical_and just because I'm generally too verbose.

        )[0]
    elif coordinate_name == 'latitude':
        valid_indices = np.where(
            (coordinate_row_col >= -90.0) & (coordinate_row_col <= 90.0)
        )[0]
    else:
        valid_indices = np.empty((0, 0))

    return valid_indices
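The combined fill-and-range filtering discussed in the thread above can be seen in a standalone sketch (the longitude values and fill value here are illustrative, and this inlines the logic rather than calling the HOSS function):

```python
import numpy as np

# Illustrative longitude data: one fill value (-9999) and one value
# outside the valid [-180, 180] range (200).
longitudes = np.array([-9999.0, -170.0, 0.0, 120.0, 200.0])
fill = -9999.0

# Combine the "not fill" mask with the valid-range mask element-wise.
valid = np.where(
    np.logical_and(
        ~np.isclose(longitudes, fill),
        np.logical_and(longitudes >= -180.0, longitudes <= 180.0),
    )
)[0]
# valid → indices 1, 2, 3
```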


def get_fill_value_for_coordinate(
Member:

Do we really need this function? It's just a wrapper for VariableFromDmr.get_attribute_value('_FillValue').

The only difference, really, is that the value is always being cast as a float. But some variables aren't floats, so this seems like an iffy thing to do.

Collaborator (author):

This is not the fill value for any variable; the fill value for the coordinates has to be converted to a float to compare. I could remove the function, but if there are cases where the fill value is not a float, it is good to have this method to handle the different cases.

Collaborator (author):

I could remove the method and add it as part of get_valid_indices

Member (@owenlittlejohns, Oct 28, 2024):

Python plays a little fast and loose with types sometimes. If you have an integer and you are trying to do something with it and a float, then Python tries to be clever and treat that integer as a float.

For example:

In [1]: integer_variable = 2
In [2]: type(integer_variable)
Out[2]: int
In [3]: integer_variable == 2.0
Out[3]: True

So, I think we're probably okay in the instance that the fill value metadata attribute is an integer. (But I totally acknowledge that there is somewhat sneaky stuff Python is doing in the background here)

    coordinate: VariableFromDmr,
) -> float | None:
    """Returns the fill value for the variable. If it does not exist, this
    checks for overrides from the JSON configuration file. If there are no
    overrides, returns None.
    """
    fill_value = coordinate.get_attribute_value('_FillValue')
    if fill_value is not None:
        return float(fill_value)
    return fill_value
61 changes: 60 additions & 1 deletion hoss/exceptions.py
@@ -57,7 +57,7 @@ class InvalidRequestedRange(CustomError):
     def __init__(self):
         super().__init__(
             'InvalidRequestedRange',
-            'Input request specified range outside supported ' 'dimension range',
+            'Input request specified range outside supported dimension range',
         )


@@ -108,6 +108,65 @@ def __init__(self):
)


class MissingCoordinateVariable(CustomError):
    """This exception is raised when HOSS tries to get latitude and
    longitude variables and they are missing or empty. These variables are
    referred to in the science variables with coordinate attributes.

    """

    def __init__(self, referring_variable):
        super().__init__(
            'MissingCoordinateVariable',
            f'Coordinate: "{referring_variable}" is '
            'not present in source granule file.',
        )


class InvalidCoordinateVariable(CustomError):
    """This exception is raised when HOSS tries to get latitude and
    longitude variables and they contain fill values to the extent that
    they cannot be used. These variables are referred to in the science
    variables with coordinate attributes.

    """

    def __init__(self, referring_variable):
        super().__init__(
            'InvalidCoordinateVariable',
            f'Coordinate: "{referring_variable}" is '
            'not valid in source granule file.',
        )


class IrregularCoordinateVariables(CustomError):
    """This exception is raised when the shape of the longitude coordinate
    variable does not match the shape of the latitude coordinate variable
    referenced by a science variable.

    """

    def __init__(self, longitude_shape, latitude_shape):
        super().__init__(
            'IrregularCoordinateVariables',
            f'Longitude coordinate shape: "{longitude_shape}" '
            f'does not match the latitude coordinate shape: "{latitude_shape}"',
        )


class CannotComputeDimensionResolution(CustomError):
    """This exception is raised when the two values passed to
    the method computing the resolution are equal.

    """

    def __init__(self, dim_value, dim_index):
        super().__init__(
            'CannotComputeDimensionResolution',
            'Cannot compute the dimension resolution for '
            f'dim_value: "{dim_value}" dim_index: "{dim_index}"',
        )


class UnsupportedShapeFileFormat(CustomError):
    """This exception is raised when the shape file included in the input
    Harmony message is not GeoJSON.