Commit

Merge pull request #68 from ayrna/development
New package version 2.0.0
franberchez authored Apr 26, 2024
2 parents 96dcd71 + 2d08a93 commit 48e2e9c
Showing 83 changed files with 2,731 additions and 1,631 deletions.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/bug_report.yml
@@ -39,8 +39,8 @@ body:
placeholder: |
```python
Your code here. Placing the code snippet here will help us reproduce the bug and identify the issue.
```
Place your code here. Placing the code snippet here will help us reproduce the bug and identify the issue.
```
- type: textarea
attributes:
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/other_issue.yml
@@ -1,6 +1,7 @@
name: ❗ Other issue
description: Other issue not covered by the other templates.
title: "[MNT] "
labels: ["maintenance"]

body:
- type: markdown
10 changes: 6 additions & 4 deletions .github/workflows/run_tests.yml
@@ -11,6 +11,7 @@ on:
pull_request:
branches:
- main
- development
paths:
- "tutorials/**"
- "dlordinal/**"
@@ -22,10 +23,10 @@ jobs:

steps:
- name: Checkout repository
uses: actions/checkout@v2
uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: 3.8

@@ -46,10 +47,10 @@ jobs:

steps:
- name: Checkout repository
uses: actions/checkout@v2
uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: 3.8

@@ -63,6 +64,7 @@ jobs:
- name: Run tests for codecov
run: |
pytest --cov=dlordinal --cov-report=xml
timeout-minutes: 20

- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
7 changes: 7 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,7 @@
{
    "python.testing.pytestArgs": [
        "dlordinal"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true
}
91 changes: 84 additions & 7 deletions README.md
@@ -1,23 +1,100 @@
# Deep learning utilities library

`dlordinal` is an open-source Python toolkit focused on deep learning with ordinal methodologies. It is compatible with
[scikit-learn](https://scikit-learn.org).

The library includes various modules such as loss functions, models, layers, metrics, and an estimator.
`dlordinal` is an open-source Python toolkit focused on deep learning with ordinal methodologies.

| Overview | |
|-----------|------------------------------------------------------------------------------------------------------------------------------------------|
| **CI/CD** | [![!codecov](https://img.shields.io/codecov/c/github/ayrna/dlordinal?label=codecov&logo=codecov)](https://codecov.io/gh/ayrna/dlordinal) [![!docs](https://readthedocs.org/projects/dlordinal/badge/?version=latest&style=flat)](https://dlordinal.readthedocs.io/en/latest/) [![!python](https://img.shields.io/badge/python-3.8%20%7C%203.9%20%7C%203.10-blue)](https://www.python.org/) |
| **Code** | [![![pypi]](https://img.shields.io/pypi/v/dlordinal)](https://pypi.org/project/dlordinal/1.0.1/) [![![binder]](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ayrna/dlordinal/main?filepath=tutorials) [![!black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![Linter: Ruff](https://img.shields.io/badge/Linter-Ruff-brightgreen?style=flat-square)](https://github.com/charliermarsh/ruff) |
| **Code** | [![![pypi]](https://img.shields.io/pypi/v/dlordinal)](https://pypi.org/project/dlordinal/2.0.0/) [![![binder]](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ayrna/dlordinal/main?filepath=tutorials) [![!black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![Linter: Ruff](https://img.shields.io/badge/Linter-Ruff-brightgreen?style=flat-square)](https://github.com/charliermarsh/ruff) |


## Table of Contents
- [⚙️ Installation](#%EF%B8%8F-installation)
- [📖 Documentation](#-documentation)
- [Collaborating](#collaborating)
- [Guidelines for code contributions](#guidelines-for-code-contributions)

## ⚙️ Installation

`dlordinal v1.0.1` is the last version supported by Python 3.8, Python 3.9 and Python 3.10.
`dlordinal v2.0.0` is the latest version and supports Python 3.8, Python 3.9 and Python 3.10.

The easiest way to install `dlordinal` is via `pip`:

pip install dlordinal
```bash
pip install dlordinal
```
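
To verify the installation, the installed version can be printed (a quick check relying on the `__version__` attribute defined in `dlordinal/__init__.py`):

```bash
python -c "import dlordinal; print(dlordinal.__version__)"
```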

## 📖 Documentation

`Sphinx` is a documentation generator commonly used in the Python ecosystem. It lets developers write documentation in a markup language called reStructuredText (reST) and generate HTML, PDF, and other formats from it, providing a powerful and flexible way to produce comprehensive, user-friendly documentation for a project.

To document `dlordinal`, it is necessary to install all documentation dependencies:

```bash
pip install -e '.[docs]'
```

Then access the `docs/` directory:

```bash
docs/
↳ api.rst
↳ conf.py
↳ distributions.rst
↳ references.bib
↳ ...
```
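
With the dependencies installed, the HTML documentation can be built from this directory. The command below is a minimal sketch assuming the standard `sphinx-build` entry point; the output path is illustrative:

```bash
cd docs
sphinx-build -b html . _build/html
```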

If a new module is created in the software project, the `api.rst` file must be modified to include the name of the new module:

```plaintext
.. _api:

=============
API Reference
=============

This is the API for the **dlordinal** package.

.. toctree::
    :maxdepth: 2
    :caption: Contents:

    losses
    datasets
    distributions
    layers
    metrics
    sklearn_integration
    ***NEW_MODULE***
```

Afterwards, a new `.rst` file associated with the new module must be created. It should specify the automatic inclusion of documentation from the module files that contain docstrings, and include the bibliography if it is cited within any of them.

```bash
docs/
↳ api.rst
↳ conf.py
↳ distributions.rst
↳ new_module.rst
↳ references.bib
↳ ...
```

```plaintext
.. _new_module:

New Module
==========

.. automodule:: dlordinal.new_module
    :members:

.. footbibliography::
```

Finally, if any new bibliographic citations have been added, they should be included in the `references.bib` file.
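
Each citation is a standard BibTeX entry in `references.bib`; the entry below is purely illustrative:

```bibtex
@article{doe2024ordinal,
  author  = {Doe, Jane},
  title   = {An Illustrative Ordinal Deep Learning Reference},
  journal = {Example Journal},
  year    = {2024}
}
```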

## Collaborating

1 change: 1 addition & 0 deletions build_tools/run_tutorials.sh
@@ -7,6 +7,7 @@ CMD="jupyter nbconvert --to notebook --inplace --execute --ExecutePreprocessor.t

excluded=(
"tutorials/datasets_tutorial.ipynb"
"tutorials/dlordinal_with_skorch_tutorial.ipynb"
)

shopt -s lastpipe
16 changes: 16 additions & 0 deletions dlordinal/.vscode/launch.json
@@ -0,0 +1,16 @@
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: current file",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
7 changes: 7 additions & 0 deletions dlordinal/.vscode/settings.json
@@ -0,0 +1,7 @@
{
    "python.testing.pytestArgs": [
        "datasets"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true
}
2 changes: 1 addition & 1 deletion dlordinal/__init__.py
@@ -1,2 +1,2 @@
__all__ = []
__version__ = "1.0.1"
__version__ = "2.0.0"
52 changes: 25 additions & 27 deletions dlordinal/datasets/adience.py
@@ -13,6 +13,31 @@
class Adience:
"""
Base class for the Adience dataset.
Parameters
----------
extract_file_path : Union[str, Path]
Path to the tar.gz file containing the dataset.
folds_path : Union[str, Path]
Path to the folder containing the folds.
images_path : Union[str, Path]
Path to the folder containing the images.
transformed_images_path : Union[str, Path]
Path to the folder containing the transformed images.
partition_path : Union[str, Path]
Path to the folder containing the partitions.
number_partitions : int, optional
Number of partitions to create, by default 20.
ranges : list, optional
List of age ranges to use, by default [(0, 2), (4, 6), (8, 13),
(15, 20), (25, 32), (38, 43), (48, 53), (60, 100)].
test_size : float, optional
Test size, by default 0.2.
extract : bool, optional
Boolean indicating if the tar.gz file should be extracted, by default True.
transfrom : bool, optional
Boolean indicating if the images should be transformed and the partitions
created, by default True.
"""

def __init__(
@@ -37,33 +62,6 @@ def __init__(
extract: bool = True,
transfrom: bool = True,
) -> None:
"""
Parameters
----------
extract_file_path : Union[str, Path]
Path to the tar.gz file containing the dataset.
folds_path : Union[str, Path]
Path to the folder containing the folds.
images_path : Union[str, Path]
Path to the folder containing the images.
transformed_images_path : Union[str, Path]
Path to the folder containing the transformed images.
partition_path : Union[str, Path]
Path to the folder containing the partitions.
number_partitions : int, optional
Number of partitions to create, by default 20.
ranges : list, optional
List of age ranges to use, by default [(0, 2), (4, 6), (8, 13),
(15, 20), (25, 32), (38, 43), (48, 53), (60, 100)].
test_size : float, optional
Test size, by default 0.2.
extract : bool, optional
Boolean indicating if the tar.gz file should be extracted, by default True.
transfrom : bool, optional
Boolean indicating if the images should be transformed and the partitions
created, by default True.
"""

super().__init__()

self.extract_file_path = Path(extract_file_path)
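
The class-level `Adience` docstring above lists the constructor parameters. As a minimal usage sketch (the import path follows the updated test imports further down; the paths below are illustrative assumptions, not part of this diff):

```python
from dlordinal.datasets import Adience

# Illustrative paths; extraction and transformation follow the defaults
# documented in the class docstring above.
adience = Adience(
    extract_file_path="./data/adience.tar.gz",
    folds_path="./data/adience/folds",
    images_path="./data/adience/images",
    transformed_images_path="./data/adience/transformed",
    partition_path="./data/adience/partitions",
)
```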
5 changes: 5 additions & 0 deletions dlordinal/datasets/featuredataset.py
@@ -57,7 +57,12 @@ def get_valid_shape_array(self, v: ArrayLike):
Parameters
----------
v : ArrayLike
Input array.
Returns
-------
v : np.ndarray
2D numpy array with shape (n, 1).
"""

if isinstance(v, pd.Series):
42 changes: 19 additions & 23 deletions dlordinal/datasets/fgnet.py
@@ -17,6 +17,25 @@
class FGNet(VisionDataset):
"""
Base class for FGNet dataset.
Parameters
----------
root : str or Path
Root directory of dataset
download : bool, optional
If True, downloads the dataset from the internet and puts it in root directory.
If dataset is already downloaded, it is not downloaded again.
process_data : bool, optional
If True, processes the dataset and puts it in root directory.
If dataset is already processed, it is not processed again.
target_size : tuple, optional
Size of the images after resizing.
categories : list, optional
List of categories to be used.
test_size : float, optional
Size of the test set.
validation_size : float, optional
Size of the validation set.
"""

def __init__(
@@ -29,29 +48,6 @@ def __init__(
test_size: float = 0.2,
validation_size: float = 0.15,
) -> None:
"""
FGNet dataset.
Parameters
----------
root : str or Path
Root directory of dataset
download : bool, optional
If True, downloads the dataset from the internet and puts it in root directory.
If dataset is already downloaded, it is not downloaded again.
process_data : bool, optional
If True, processes the dataset and puts it in root directory.
If dataset is already processed, it is not processed again.
target_size : tuple, optional
Size of the images after resizing.
categories : list, optional
List of categories to be used.
test_size : float, optional
Size of the test set.
validation_size : float, optional
Size of the validation set.
"""

super(FGNet, self).__init__(root)

self.root = Path(self.root)
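
The relocated `FGNet` docstring above documents the constructor parameters. As a minimal usage sketch (the import path follows the updated test imports further down; the root path and argument values are illustrative assumptions, not part of this diff):

```python
from dlordinal.datasets import FGNet

# Download and process FGNet under ./data (illustrative path); the remaining
# parameters keep the defaults documented in the class docstring above.
fgnet = FGNet(root="./data", download=True, process_data=True)
```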
2 changes: 1 addition & 1 deletion dlordinal/datasets/tests/test_adience.py
@@ -8,7 +8,7 @@
import pytest
from PIL import Image

from ..adience import Adience
from dlordinal.datasets import Adience

temp_dir = None

2 changes: 1 addition & 1 deletion dlordinal/datasets/tests/test_featuredataset.py
@@ -2,7 +2,7 @@
import pytest
import torch

from ..featuredataset import FeatureDataset
from dlordinal.datasets import FeatureDataset


@pytest.fixture
2 changes: 1 addition & 1 deletion dlordinal/datasets/tests/test_fgnet.py
@@ -3,7 +3,7 @@

import pytest

from ..fgnet import FGNet
from dlordinal.datasets import FGNet

TMP_DIR = "./tmp_test_dir_fgnet"

15 changes: 0 additions & 15 deletions dlordinal/distributions/__init__.py

This file was deleted.
