[WIP] Implement linting with Ruff #674

Status: Open. Wants to merge 2 commits into base branch main.
Changes from all commits
75 changes: 28 additions & 47 deletions .pre-commit-config.yaml
@@ -30,58 +30,39 @@ repos:
hooks:
- id: absolufy-imports

# Format the code aggressively using black
- repo: https://github.com/psf/black
rev: 24.10.0
# Ruff linter and code formatter
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.8.6
hooks:
- id: black
args: [--line-length=120]
# Run the linter.
- id: ruff
# Run the formatter.
- id: ruff-format

# Lint the code using flake8
- repo: https://github.com/pycqa/flake8
rev: 7.1.1
# Enable lint fixes with ruff
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.8.6
hooks:
- id: flake8
# More than one argument in the second list, so need to pass arguments as below (and -- to finish)
args: [
'--max-line-length', '120', # we can write dicts however we want
'--extend-ignore', 'E203,C408,B028', # flake8 disagrees with black, so this should be ignored.
'--'
]
additional_dependencies:
- flake8-comprehensions
- flake8-bugbear
files: ^(xdem|tests)

# Lint the code using mypy
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.13.0
hooks:
- id: mypy
args: [
--config-file=mypy.ini,
--strict,
--implicit-optional,
--ignore-missing-imports, # Don't warn about stubs since pre-commit runs in a limited env
--allow-untyped-calls, # Dynamic function/method calls are okay. Untyped function definitions are not okay.
--show-error-codes,
--no-warn-unused-ignores, # Ignore 'type: ignore' comments that are not used.
--disable-error-code=attr-defined, # "Module has no attribute 'XXX'" occurs because of the pre-commit env.
--disable-error-code=name-defined, # "Name 'XXX' is not defined" occurs because of the pre-commit env.
--disable-error-code=var-annotated,
--disable-error-code=no-any-return

]
additional_dependencies: [tokenize-rt==3.2.0, numpy==1.26]
files: ^(xdem|tests|doc/code)

# Run the linter.
- id: ruff
args: [ --fix ]
# Run the formatter.
- id: ruff-format

# Sort imports using isort
- repo: https://github.com/PyCQA/isort
rev: 5.13.2
# To run ruff over Jupyter Notebooks
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.8.6
hooks:
- id: isort
args: ["--profile", "black"]
# Run the linter.
- id: ruff
types_or: [ python, pyi, jupyter ]
args: [ --fix ]
# Run the formatter.
- id: ruff-format
types_or: [ python, pyi, jupyter ]

# Automatically upgrade syntax to a minimum version
- repo: https://github.com/asottile/pyupgrade
83 changes: 83 additions & 0 deletions .ruff.toml
@@ -0,0 +1,83 @@
# Exclude a variety of commonly ignored directories.
# Note: unlike `select`/`ignore`, `exclude` has no "ALL" keyword; this entry is
# matched literally as a path named "ALL", so in practice only `extend-exclude`
# below and Ruff's default excludes take effect.
exclude = ["ALL"]

extend-exclude = [
".github",
"binder",
"doc",
"xdem.egg-info",
".coveragerc",
".gitignore",
".pre-commit-config.yaml",
".readthedocs.yaml",
".relint.yml",
]

# Support Python 3.10+.
target-version = "py310"

# Same as Black.
# The formatter wraps lines at a length of 120 when enforcing long-line violations (like E501).
line-length = 120
# Number of spaces per indentation level, used by the formatter and when enforcing long-line violations.
indent-width = 4

[lint.pycodestyle]
# E501 reports lines that exceed the length of 120.
max-line-length = 120

[lint]
# Enable all rules ("ALL"), then opt out of specific ones via `ignore` below.
# By default, Ruff only enables Pyflakes (`F`) and a subset of the pycodestyle
# (`E`) codes; unlike Flake8, it does not enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`, default threshold 10).
select = ["ALL"]

# Skip TODO-annotation rules (FIX002, TD002, TD003)
# Skip pydocstyle rules (D101, D205, D400, D401, D415)
# Skip flake8-simplify rules (SIM102, SIM108, SIM115)
# Skip pygrep-hooks rules (PGH003, PGH004)
# Skip pylint refactor rules (PLR0912, PLR0913, PLR0915, PLR2004)
# Skip the use-of-assert rule (S101)
# Skip flake8-type-checking rules (TC001, TC002, TC003)
# Skip tryceratops rules (TRY003, TRY201)
# Skip the pyupgrade non-pep604-isinstance rule (UP038)
# ...
ignore = ["ANN401", "ARG002", "B028", "B904", "BLE001", "C901", "D101", "D205", "D400", "D401", "D415", "EM101",
"EM102", "ERA001", "F541", "FBT001", "FBT002", "FBT003", "FIX002", "INP001", "PD011", "PGH003", "PGH004",
"PLR0912", "PLR0913", "PLR0915", "PLR2004", "PLW0127", "PT011", "PTH118", "PYI041", "PYI051", "RET504",
"S101", "SIM102", "SIM108", "SIM115", "T201", "TC001", "TC002", "TC003", "TD002", "TD003", "TRY003",
"TRY201", "UP038"]

# Allow fixes for all enabled rules (when `--fix` is provided).
fixable = ["ALL"]
unfixable = []

# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"

[format]
# Like Black, use double quotes for strings.
quote-style = "double"

# Like Black, indent with spaces, rather than tabs.
indent-style = "space"

# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false

# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"

# Enable auto-formatting of code examples in docstrings. Markdown,
# reStructuredText code/literal blocks and doctests are all supported.
#
# This is currently disabled by default, but it is planned for this
# to be opt-out in the future.
docstring-code-format = false

# Set the line length limit used when formatting code snippets in
# docstrings.
#
# This only has an effect when the `docstring-code-format` setting is
# enabled.
docstring-code-line-length = "dynamic"
20 changes: 20 additions & 0 deletions Makefile
@@ -82,6 +82,26 @@ test: ## run tests
${VENV}/bin/pytest; \
fi

## Code quality, linting section

.PHONY: lint
lint: ruff ## Apply the ruff linter.

.PHONY: lint-check
lint-check: ## Check whether the codebase satisfies the linter rules.
@echo
@echo "Checking linter rules..."
@echo "========================"
@echo
@ruff check .  # note: $(PATH) would expand to the shell's PATH environment variable, so lint the repository root instead

.PHONY: ruff
ruff: ## Apply ruff.
@echo "Applying ruff..."
@echo "================"
@echo
@ruff check --fix .  # bare `ruff --fix` is not a valid command in recent Ruff versions; fixes go through `ruff check --fix`

## Clean section

.PHONY: clean
18 changes: 18 additions & 0 deletions examples/__init__.py
@@ -0,0 +1,18 @@
# Copyright (c) 2024 Centre National d'Etudes Spatiales (CNES).
#
# This file is part of the xDEM project:
# https://github.com/glaciohack/xdem
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
#
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""xDEM examples module init file."""
18 changes: 18 additions & 0 deletions examples/advanced/__init__.py
@@ -0,0 +1,18 @@
# Copyright (c) 2024 Centre National d'Etudes Spatiales (CNES).
#
# This file is part of the xDEM project:
# https://github.com/glaciohack/xdem
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
#
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" """ # noqa
17 changes: 10 additions & 7 deletions examples/advanced/plot_blockwise_coreg.py
@@ -1,11 +1,12 @@
"""
Blockwise coregistration
"""Blockwise coregistration
========================

Often, biases are spatially variable, and a "global" shift may not be enough to coregister a DEM properly.
In the :ref:`sphx_glr_basic_examples_plot_nuth_kaab.py` example, we saw that the method improved the alignment significantly, but there were still possibly nonlinear artefacts in the result.
In the :ref:`sphx_glr_basic_examples_plot_nuth_kaab.py` example, we saw that the method improved the alignment
significantly, but there were still possibly nonlinear artefacts in the result.
Clearly, nonlinear coregistration approaches are needed.
One solution is :class:`xdem.coreg.BlockwiseCoreg`, a helper to run any ``Coreg`` class over an arbitrarily small grid, and then "puppet warp" the DEM to fit the reference best.
One solution is :class:`xdem.coreg.BlockwiseCoreg`, a helper to run any ``Coreg`` class over an arbitrarily small grid,
and then "puppet warp" the DEM to fit the reference best.

The ``BlockwiseCoreg`` class runs in five steps:

@@ -43,7 +44,8 @@
]

# %%
# The DEM to be aligned (a 1990 photogrammetry-derived DEM) has some vertical and horizontal biases that we want to avoid, as well as possible nonlinear distortions.
# The DEM to be aligned (a 1990 photogrammetry-derived DEM) has some vertical and horizontal biases that we want to
# avoid, as well as possible nonlinear distortions.
# The product is a mosaic of multiple DEMs, so "seams" may exist in the data.
# These can be visualized by plotting a change map:

@@ -75,11 +77,12 @@

# %%
# The estimated shifts can be visualized by applying the coregistration to a completely flat surface.
# This shows the estimated shifts that would be applied in elevation; additional horizontal shifts will also be applied if the method supports it.
# This shows the estimated shifts that would be applied in elevation; additional horizontal shifts will also be applied
# if the method supports it.
# The :func:`xdem.coreg.BlockwiseCoreg.stats` method can be used to annotate each block with its associated Z shift.

z_correction = blockwise.apply(
np.zeros_like(dem_to_be_aligned.data), transform=dem_to_be_aligned.transform, crs=dem_to_be_aligned.crs
np.zeros_like(dem_to_be_aligned.data), transform=dem_to_be_aligned.transform, crs=dem_to_be_aligned.crs,
)[0]
plt.title("Vertical correction")
plt.imshow(z_correction, cmap="RdYlBu", vmin=-10, vmax=10, extent=plt_extent)
31 changes: 20 additions & 11 deletions examples/advanced/plot_demcollection.py
@@ -1,16 +1,17 @@
"""
Working with a collection of DEMs
"""Working with a collection of DEMs
=================================

.. caution:: This functionality might be removed in future package versions.

Oftentimes, more than two timestamps (DEMs) are analyzed simultaneously.
One single dDEM only captures one interval, so multiple dDEMs have to be created.
In addition, if multiple masking polygons exist (e.g. glacier outlines from multiple years), these should be accounted for properly.
The :class:`xdem.DEMCollection` is a tool to properly work with multiple timestamps at the same time, and makes calculations of elevation/volume change over multiple years easy.
In addition, if multiple masking polygons exist (e.g. glacier outlines from multiple years),
these should be accounted for properly.
The :class:`xdem.DEMCollection` is a tool to properly work with multiple timestamps at the same time, and makes
calculations of elevation/volume change over multiple years easy.
"""

from datetime import datetime
from datetime import datetime, timezone

import geoutils as gu
import matplotlib.pyplot as plt
@@ -32,8 +33,12 @@
# These parts can be delineated with masks or polygons.
# Here, we have glacier outlines from 1990 and 2009.
outlines = {
datetime(1990, 8, 1): gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines")),
datetime(2009, 8, 1): gu.Vector(xdem.examples.get_path("longyearbyen_glacier_outlines_2010")),
datetime(1990, 8, 1, tzinfo=timezone.utc): gu.Vector(
xdem.examples.get_path("longyearbyen_glacier_outlines"),
),
datetime(2009, 8, 1, tzinfo=timezone.utc): gu.Vector(
xdem.examples.get_path("longyearbyen_glacier_outlines_2010"),
),
}

# %%
@@ -42,7 +47,9 @@
# Fake a 2060 DEM by assuming twice the change from 1990-2009 between 2009 and 2060
dem_2060 = dem_2009 + (dem_2009 - dem_1990).data * 3

timestamps = [datetime(1990, 8, 1), datetime(2009, 8, 1), datetime(2060, 8, 1)]
timestamps = [datetime(1990, 8, 1, tzinfo=timezone.utc),
datetime(2009, 8, 1, tzinfo=timezone.utc),
datetime(2060, 8, 1, tzinfo=timezone.utc)]

# %%
# Now, all data are ready to be collected in an :class:`xdem.DEMCollection` object.
@@ -52,7 +59,7 @@
#

demcollection = xdem.DEMCollection(
dems=[dem_1990, dem_2009, dem_2060], timestamps=timestamps, outlines=outlines, reference_dem=1
dems=[dem_1990, dem_2009, dem_2060], timestamps=timestamps, outlines=outlines, reference_dem=1,
)

# %%
@@ -69,9 +76,11 @@
# These are saved internally, but are also returned as a list.
#
# An elevation or volume change series can automatically be generated from the ``DEMCollection``.
# In this case, we should specify *which* glacier we want the change for, as a regional value may not always be required.
# In this case, we should specify *which* glacier we want the change for,
# as a regional value may not always be required.
# We can look at the glacier called "Scott Turnerbreen", specified in the "NAME" column of the outline data.
# `See here for the outline filtering syntax <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html>`_.
# `See here for the outline filtering syntax
# <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html>`_.

demcollection.get_cumulative_series(kind="dh", outlines_filter="NAME == 'Scott Turnerbreen'")

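The `tzinfo=timezone.utc` arguments added throughout this example satisfy Ruff's flake8-datetimez rules: DTZ001 flags `datetime()` constructor calls that lack an explicit timezone, since naive datetimes are ambiguous. A minimal sketch of the difference (for illustration only, not part of the PR):

```python
from datetime import datetime, timezone

# Naive datetime: no tzinfo, flagged by Ruff's flake8-datetimez rule DTZ001.
naive = datetime(1990, 8, 1)
print(naive.tzinfo)  # None

# Timezone-aware datetime, as used in the updated example above.
aware = datetime(1990, 8, 1, tzinfo=timezone.utc)
print(aware.tzinfo)       # UTC
print(aware.isoformat())  # 1990-08-01T00:00:00+00:00
```

Pinning the outlines and timestamps to UTC keeps comparisons between acquisition dates unambiguous.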
3 changes: 1 addition & 2 deletions examples/advanced/plot_deramp.py
@@ -1,5 +1,4 @@
"""
Bias-correction with deramping
"""Bias-correction with deramping
==============================

Deramping can help correct rotational or doming errors in elevation data.