[FEA]: Introduce Python module with CCCL headers #3201

Merged
merged 79 commits into main from pip-cuda-cccl on Jan 17, 2025

Commits (79)
daab580
Add cccl/python/cuda_cccl directory and use from cuda_parallel, cuda_…
rwgk Dec 12, 2024
ef9d5f4
Run `copy_cccl_headers_to_aude_include()` before `setup()`
rwgk Dec 20, 2024
bc116dc
Create python/cuda_cccl/cuda/_include/__init__.py, then simply import…
rwgk Dec 20, 2024
2913ae0
Add cuda.cccl._version exactly as for cuda.cooperative and cuda.parallel
rwgk Dec 20, 2024
7dbb82b
Bug fix: cuda/_include only exists after shutil.copytree() ran.
rwgk Dec 20, 2024
0703901
Use `f"cuda-cccl @ file://{cccl_path}/python/cuda_cccl"` in setup.py
rwgk Dec 20, 2024
fc0e543
Remove CustomBuildCommand, CustomWheelBuild in cuda_parallel/setup.py…
rwgk Dec 20, 2024
2e64345
Replace := operator (needs Python 3.8+)
rwgk Dec 20, 2024
82467cd
Merge branch 'main' into pip-cuda-cccl
rwgk Dec 20, 2024
f13a96b
Fix oversights: remove `pip3 install ./cuda_cccl` lines from README.md
rwgk Dec 20, 2024
9ed6036
Restore original README.md: `pip3 install -e` now works on first pass.
rwgk Dec 20, 2024
c9a4d96
cuda_cccl/README.md: FOR INTERNAL USE ONLY
rwgk Dec 20, 2024
df943c0
Remove `$pymajor.$pyminor.` prefix in cuda_cccl _version.py (as sugge…
rwgk Dec 20, 2024
40c8389
Modernize pyproject.toml, setup.py
rwgk Dec 21, 2024
e3c7867
Install CCCL headers under cuda.cccl.include
rwgk Dec 21, 2024
acbd477
Merge branch 'main' into pip-cuda-cccl
rwgk Dec 21, 2024
06f575f
Factor out cuda_cccl/cuda/cccl/include_paths.py
rwgk Dec 21, 2024
e747768
Reuse cuda_cccl/cuda/cccl/include_paths.py from cuda_cooperative
rwgk Dec 21, 2024
499b191
Merge branch 'main' into pip-cuda-cccl
rwgk Dec 21, 2024
62ce2d3
Add missing Copyright notice.
rwgk Dec 21, 2024
65c5a15
Add missing __init__.py (cuda.cccl)
rwgk Dec 21, 2024
bffece6
Add `"cuda.cccl"` to `autodoc.mock_imports`
rwgk Dec 21, 2024
585447c
Move cuda.cccl.include_paths into function where it is used. (Attempt…
rwgk Dec 22, 2024
55c4311
Add # TODO: move this to a module-level import
rwgk Dec 22, 2024
1f3a029
Modernize cuda_cooperative/pyproject.toml, setup.py
rwgk Dec 26, 2024
61637d6
Convert cuda_cooperative to use hatchling as build backend.
rwgk Dec 26, 2024
4a0cca1
Revert "Convert cuda_cooperative to use hatchling as build backend."
rwgk Dec 27, 2024
7dd3d16
Move numpy from [build-system] requires -> [project] dependencies
rwgk Dec 27, 2024
efab5be
Move pyproject.toml [project] dependencies -> setup.py install_requir…
rwgk Dec 27, 2024
9fde3d1
Remove copy_license() and use license_files=["../../LICENSE"] instead.
rwgk Dec 27, 2024
bda5d51
Further modernize cuda_cccl/setup.py to use pathlib
rwgk Dec 27, 2024
4e9720d
Trivial simplifications in cuda_cccl/pyproject.toml
rwgk Dec 27, 2024
c1aea17
Further simplify cuda_cccl/pyproject.toml, setup.py: remove inconsequ…
rwgk Dec 28, 2024
d18d699
Make cuda_cooperative/pyproject.toml more similar to cuda_cccl/pyproj…
rwgk Dec 28, 2024
9be94c6
Add taplo-pre-commit to .pre-commit-config.yaml
rwgk Dec 28, 2024
c2a9f24
taplo-pre-commit auto-fixes
rwgk Dec 28, 2024
c89d620
Use pathlib in cuda_cooperative/setup.py
rwgk Dec 28, 2024
1b3599b
CCCL_PYTHON_PATH in cuda_cooperative/setup.py
rwgk Dec 28, 2024
796b741
Modernize cuda_parallel/pyproject.toml, setup.py
rwgk Dec 28, 2024
9a63830
Use pathlib in cuda_parallel/setup.py
rwgk Dec 28, 2024
477fe3b
Add `# TOML lint & format` comment.
rwgk Dec 28, 2024
246ddf7
Replace MANIFEST.in with `[tool.setuptools.package-data]` section in …
rwgk Dec 28, 2024
e1fd264
Use pathlib in cuda/cccl/include_paths.py
rwgk Dec 28, 2024
87b46ca
pre-commit autoupdate (EXCEPT clang-format, which was manually restored)
rwgk Dec 29, 2024
9597dad
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 6, 2025
eddc6cc
Fixes after git merge main
rwgk Jan 6, 2025
bcf0de8
Resolve warning: AttributeError: '_Reduce' object has no attribute 'b…
rwgk Jan 6, 2025
c763301
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 8, 2025
71fd243
Move `copy_cccl_headers_to_cuda_cccl_include()` functionality to `cla…
rwgk Jan 8, 2025
79057cf
Introduce cuda_cooperative/constraints.txt
rwgk Jan 8, 2025
ccaf8a5
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 9, 2025
46a8329
Also add cuda_parallel/constraints.txt
rwgk Jan 9, 2025
a07222b
Add `--constraint constraints.txt` in ci/test_python.sh
rwgk Jan 9, 2025
2d3c2ed
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 14, 2025
b65f510
Update Copyright dates
rwgk Jan 14, 2025
47893d9
Switch to https://github.com/ComPWA/taplo-pre-commit (the other repo …
rwgk Jan 14, 2025
324ac4f
Remove unused cuda_parallel jinja2 dependency (noticed by chance).
rwgk Jan 14, 2025
3026c81
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 14, 2025
b0d422a
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 15, 2025
e904846
Remove constraints.txt files, advertise running `pip install cuda-ccc…
rwgk Jan 15, 2025
c1f571d
Make cuda_cooperative, cuda_parallel testing completely independent.
rwgk Jan 15, 2025
792e4ba
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 15, 2025
695cc9b
Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc]
rwgk Jan 15, 2025
ea33a21
Try using another runner (because V100 runners seem to be stuck) [ski…
rwgk Jan 15, 2025
d439f79
Fix sign-compare warning (#3408) [skip-rapids][skip-matx][skip-docs][…
bernhardmgruber Jan 15, 2025
9a7b498
Revert "Try using another runner (because V100 runners seem to be stu…
rwgk Jan 15, 2025
5d33bb0
Merge branch 'main' into pip-cuda-cccl [skip-rapids][skip-matx][skip-…
rwgk Jan 15, 2025
be34834
Try using A100 runner (because V100 runners still seem to be stuck) […
rwgk Jan 15, 2025
b2b2b5b
Also show cuda-cooperative site-packages, cuda-parallel site-packages…
rwgk Jan 15, 2025
9f83b0d
Try using l4 runner (because V100 runners still seem to be stuck) [sk…
rwgk Jan 15, 2025
4807a79
Restore original ci/matrix.yaml [skip-rapids]
rwgk Jan 16, 2025
d97a68a
Use for loop in test_python.sh to avoid code duplication.
rwgk Jan 16, 2025
ec206fd
Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc]…
rwgk Jan 15, 2025
1f4d210
Merge branch 'main' into pip-cuda-cccl [skip-rapids][skip-matx][skip-…
rwgk Jan 16, 2025
f94bbb1
Comment out taplo-lint in pre-commit config [skip-rapids][skip-matx][…
rwgk Jan 16, 2025
b48f866
Revert "Run only test_python.sh [skip-rapids][skip-matx][skip-docs][s…
rwgk Jan 16, 2025
917147f
Implement suggestion by @shwina (https://github.com/NVIDIA/cccl/pull/…
rwgk Jan 16, 2025
ebdbb22
Merge branch 'main' into pip-cuda-cccl
rwgk Jan 16, 2025
12dbf29
Address feedback by @leofang
rwgk Jan 16, 2025
11 changes: 11 additions & 0 deletions .pre-commit-config.yaml
@@ -43,6 +43,17 @@ repos:
hooks:
- id: ruff # linter
- id: ruff-format # formatter

# TOML lint & format
- repo: https://github.com/ComPWA/taplo-pre-commit
rev: v0.9.3
hooks:
# See https://github.com/NVIDIA/cccl/issues/3426
# - id: taplo-lint
# exclude: "^docs/"
- id: taplo-format
exclude: "^docs/"

- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
hooks:
33 changes: 18 additions & 15 deletions ci/test_python.sh
@@ -8,25 +8,28 @@ print_environment_details

fail_if_no_gpu

readonly prefix="${BUILD_DIR}/python/"
export PYTHONPATH="${prefix}:${PYTHONPATH:-}"
begin_group "⚙️ Existing site-packages"
pip freeze
end_group "⚙️ Existing site-packages"

pushd ../python/cuda_cooperative >/dev/null
for module in cuda_parallel cuda_cooperative; do

run_command "⚙️ Pip install cuda_cooperative" pip install --force-reinstall --upgrade --target "${prefix}" .[test]
run_command "🚀 Pytest cuda_cooperative" python -m pytest -v ./tests
pushd "../python/${module}" >/dev/null

popd >/dev/null
TEMP_VENV_DIR="/tmp/${module}_venv"
rm -rf "${TEMP_VENV_DIR}"
python -m venv "${TEMP_VENV_DIR}"
. "${TEMP_VENV_DIR}/bin/activate"
echo 'cuda-cccl @ file:///home/coder/cccl/python/cuda_cccl' > /tmp/cuda-cccl_constraints.txt
run_command "⚙️ Pip install ${module}" pip install -c /tmp/cuda-cccl_constraints.txt .[test]
begin_group "⚙️ ${module} site-packages"
pip freeze
end_group "⚙️ ${module} site-packages"
run_command "🚀 Pytest ${module}" python -m pytest -v ./tests
deactivate

pushd ../python/cuda_parallel >/dev/null
popd >/dev/null

# Temporarily install the package twice to populate include directory as part of the first installation
# and to let manifest discover these includes during the second installation. Do not forget to remove the
# second installation after https://github.com/NVIDIA/cccl/issues/2281 is addressed.
run_command "⚙️ Pip install cuda_parallel once" pip install --force-reinstall --upgrade --target "${prefix}" .[test]
run_command "⚙️ Pip install cuda_parallel twice" pip install --force-reinstall --upgrade --target "${prefix}" .[test]
run_command "🚀 Pytest cuda_parallel" python -m pytest -v ./tests

popd >/dev/null
done

print_time_summary
2 changes: 2 additions & 0 deletions ci/update_version.sh
@@ -37,6 +37,7 @@ CUB_CMAKE_VERSION_FILE="lib/cmake/cub/cub-config-version.cmake"
LIBCUDACXX_CMAKE_VERSION_FILE="lib/cmake/libcudacxx/libcudacxx-config-version.cmake"
THRUST_CMAKE_VERSION_FILE="lib/cmake/thrust/thrust-config-version.cmake"
CUDAX_CMAKE_VERSION_FILE="lib/cmake/cudax/cudax-config-version.cmake"
CUDA_CCCL_VERSION_FILE="python/cuda_cccl/cuda/cccl/_version.py"
CUDA_COOPERATIVE_VERSION_FILE="python/cuda_cooperative/cuda/cooperative/_version.py"
CUDA_PARALLEL_VERSION_FILE="python/cuda_parallel/cuda/parallel/_version.py"

@@ -110,6 +111,7 @@ update_file "$CUDAX_CMAKE_VERSION_FILE" "set(cudax_VERSION_MAJOR \([0-9]\+\))" "
update_file "$CUDAX_CMAKE_VERSION_FILE" "set(cudax_VERSION_MINOR \([0-9]\+\))" "set(cudax_VERSION_MINOR $minor)"
update_file "$CUDAX_CMAKE_VERSION_FILE" "set(cudax_VERSION_PATCH \([0-9]\+\))" "set(cudax_VERSION_PATCH $patch)"

update_file "$CUDA_CCCL_VERSION_FILE" "^__version__ = \"\([0-9.]\+\)\"" "__version__ = \"$major.$minor.$patch\""
update_file "$CUDA_COOPERATIVE_VERSION_FILE" "^__version__ = \"\([0-9.]\+\)\"" "__version__ = \"$pymajor.$pyminor.$major.$minor.$patch\""
update_file "$CUDA_PARALLEL_VERSION_FILE" "^__version__ = \"\([0-9.]\+\)\"" "__version__ = \"$pymajor.$pyminor.$major.$minor.$patch\""

1 change: 1 addition & 0 deletions docs/repo.toml
@@ -347,6 +347,7 @@ autodoc.mock_imports = [
"numba",
"pynvjitlink",
"cuda.bindings",
"cuda.cccl",
"llvmlite",
"numpy",
]
2 changes: 2 additions & 0 deletions python/cuda_cccl/.gitignore
@@ -0,0 +1,2 @@
cuda/cccl/include
*egg-info
3 changes: 3 additions & 0 deletions python/cuda_cccl/README.md
@@ -0,0 +1,3 @@
## Note

This package is currently FOR INTERNAL USE ONLY and not meant to be used/installed explicitly.
8 changes: 8 additions & 0 deletions python/cuda_cccl/cuda/cccl/__init__.py
@@ -0,0 +1,8 @@
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
#
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

from cuda.cccl._version import __version__
from cuda.cccl.include_paths import get_include_paths

__all__ = ["__version__", "get_include_paths"]
7 changes: 7 additions & 0 deletions python/cuda_cccl/cuda/cccl/_version.py
@@ -0,0 +1,7 @@
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
#
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

# This file is generated by ci/update_version.sh
# Do not edit this file manually.
__version__ = "2.8.0"
63 changes: 63 additions & 0 deletions python/cuda_cccl/cuda/cccl/include_paths.py
@@ -0,0 +1,63 @@
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
#
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

import os
import shutil
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
from typing import Optional


def _get_cuda_path() -> Optional[Path]:
cuda_path = os.environ.get("CUDA_PATH")
if cuda_path:
cuda_path = Path(cuda_path)
if cuda_path.exists():
return cuda_path

nvcc_path = shutil.which("nvcc")
if nvcc_path:
return Path(nvcc_path).parent.parent

default_path = Path("/usr/local/cuda")
if default_path.exists():
return default_path

return None


@dataclass
class IncludePaths:
cuda: Optional[Path]
libcudacxx: Optional[Path]
cub: Optional[Path]
thrust: Optional[Path]

def as_tuple(self):
# Note: higher-level ... lower-level order:
return (self.thrust, self.cub, self.libcudacxx, self.cuda)


@lru_cache()
def get_include_paths() -> IncludePaths:
# TODO: once docs env supports Python >= 3.9, we
# can move this to a module-level import.
from importlib.resources import as_file, files

cuda_incl = None
cuda_path = _get_cuda_path()
if cuda_path is not None:
cuda_incl = cuda_path / "include"

with as_file(files("cuda.cccl.include")) as f:
cccl_incl = Path(f)
assert cccl_incl.exists()

return IncludePaths(
cuda=cuda_incl,
libcudacxx=cccl_incl / "libcudacxx",
cub=cccl_incl,
thrust=cccl_incl,
)
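
As a point of reference (not part of the diff): a minimal usage sketch of the new `get_include_paths()` API added here, mirroring how cuda_cooperative's `_nvrtc.py` consumes it further down. It assumes the cuda-cccl package above has been installed.

```python
# Illustrative only; assumes cuda-cccl is installed (e.g. `pip3 install -e python/cuda_cccl`).
from cuda.cccl import get_include_paths

paths = get_include_paths()
# as_tuple() orders the paths higher-level to lower-level: thrust, cub, libcudacxx, cuda.
flags = [f"--include-path={p}" for p in paths.as_tuple() if p is not None]
print(flags)
```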
29 changes: 29 additions & 0 deletions python/cuda_cccl/pyproject.toml
@@ -0,0 +1,29 @@
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
#
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

[build-system]
requires = ["setuptools>=61.0.0"]
build-backend = "setuptools.build_meta"

[project]
name = "cuda-cccl"
description = "Experimental Package with CCCL headers to support JIT compilation"
authors = [{ name = "NVIDIA Corporation" }]
classifiers = [
"Programming Language :: Python :: 3 :: Only",
"Environment :: GPU :: NVIDIA CUDA",
"License :: OSI Approved :: Apache Software License",
]
requires-python = ">=3.9"
dynamic = ["version", "readme"]

[project.urls]
Homepage = "https://github.com/NVIDIA/cccl"

[tool.setuptools.dynamic]
version = { attr = "cuda.cccl._version.__version__" }
readme = { file = ["README.md"], content-type = "text/markdown" }

[tool.setuptools.package-data]
cuda = ["cccl/include/**/*"]
51 changes: 51 additions & 0 deletions python/cuda_cccl/setup.py
@@ -0,0 +1,51 @@
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
#
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

import shutil
from pathlib import Path

from setuptools import setup
from setuptools.command.build_py import build_py

PROJECT_PATH = Path(__file__).resolve().parent
CCCL_PATH = PROJECT_PATH.parents[1]


class CustomBuildPy(build_py):
"""Copy CCCL headers BEFORE super().run()

Note that the CCCL headers cannot be referenced directly:
setuptools (and pyproject.toml) does not support relative paths that
reference files outside the package directory (like ../../).
This is a restriction designed to avoid inadvertently packaging files
that are outside the source tree.
"""

def run(self):
cccl_headers = [
("cub", "cub"),
("libcudacxx", "include"),
("thrust", "thrust"),
]

inc_path = PROJECT_PATH / "cuda" / "cccl" / "include"
inc_path.mkdir(parents=True, exist_ok=True)

for proj_dir, header_dir in cccl_headers:
src_path = CCCL_PATH / proj_dir / header_dir
dst_path = inc_path / proj_dir
if dst_path.exists():
shutil.rmtree(dst_path)
shutil.copytree(src_path, dst_path)

init_py_path = inc_path / "__init__.py"
init_py_path.write_text("# Intentionally empty.\n")

super().run()


setup(
license_files=["../../LICENSE"],
cmdclass={"build_py": CustomBuildPy},
)
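
For orientation (not part of the diff): a small sketch that checks the header tree produced by `CustomBuildPy` is reachable through the installed package. It assumes cuda-cccl has been installed (e.g. `pip3 install -e ../cuda_cccl`, as in the cuda_cooperative README change below).

```python
# Illustrative sanity check; package names follow the files added in this PR.
from importlib.resources import as_file, files

# Headers are shipped as package data under cuda.cccl.include
# (see [tool.setuptools.package-data] in pyproject.toml above).
with as_file(files("cuda.cccl.include")) as root:
    for sub in ("cub", "thrust", "libcudacxx"):
        print(sub, (root / sub).is_dir())
```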
1 change: 0 additions & 1 deletion python/cuda_cooperative/.gitignore
@@ -1,3 +1,2 @@
cuda/_include
env
*egg-info
1 change: 0 additions & 1 deletion python/cuda_cooperative/MANIFEST.in

This file was deleted.

1 change: 1 addition & 0 deletions python/cuda_cooperative/README.md
@@ -7,6 +7,7 @@ Please visit the documentation here: https://nvidia.github.io/cccl/python.html.
## Local development

```bash
pip3 install -e ../cuda_cccl
pip3 install -e .[test]
pytest -v ./tests/
```
46 changes: 9 additions & 37 deletions python/cuda_cooperative/cuda/cooperative/experimental/_nvrtc.py
@@ -3,9 +3,6 @@
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

import functools
import importlib.resources as pkg_resources
import os
import shutil

from cuda.bindings import nvrtc
from cuda.cooperative.experimental._caching import disk_cache
@@ -20,22 +17,6 @@ def CHECK_NVRTC(err, prog):
raise RuntimeError(f"NVRTC error: {log.decode('ascii')}")


def get_cuda_path():
cuda_path = os.environ.get("CUDA_PATH", "")
if os.path.exists(cuda_path):
return cuda_path

nvcc_path = shutil.which("nvcc")
if nvcc_path is not None:
return os.path.dirname(os.path.dirname(nvcc_path))

default_path = "/usr/local/cuda"
if os.path.exists(default_path):
return default_path

return None


# cpp is the C++ source code
# cc = 800 for Ampere, 900 Hopper, etc
# rdc is true or false
@@ -47,24 +28,15 @@ def compile_impl(cpp, cc, rdc, code, nvrtc_path, nvrtc_version):
check_in("rdc", rdc, [True, False])
check_in("code", code, ["lto", "ptx"])

with pkg_resources.path("cuda", "_include") as include_path:
# Using `.parent` for compatibility with pip install --editable:
include_path = pkg_resources.files("cuda.cooperative").parent.joinpath(
"_include"
)
cub_path = include_path
thrust_path = include_path
libcudacxx_path = os.path.join(include_path, "libcudacxx")
cuda_include_path = os.path.join(get_cuda_path(), "include")

opts = [
b"--std=c++17",
bytes(f"--include-path={cub_path}", encoding="ascii"),
bytes(f"--include-path={thrust_path}", encoding="ascii"),
bytes(f"--include-path={libcudacxx_path}", encoding="ascii"),
bytes(f"--include-path={cuda_include_path}", encoding="ascii"),
bytes(f"--gpu-architecture=compute_{cc}", encoding="ascii"),
]
opts = [b"--std=c++17"]

# TODO: move this to a module-level import (after docs env modernization).
from cuda.cccl import get_include_paths

for path in get_include_paths().as_tuple():
if path is not None:
opts += [f"--include-path={path}".encode("ascii")]
opts += [f"--gpu-architecture=compute_{cc}".encode("ascii")]
if rdc:
opts += [b"--relocatable-device-code=true"]

34 changes: 32 additions & 2 deletions python/cuda_cooperative/pyproject.toml
@@ -1,11 +1,41 @@
# Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. ALL RIGHTS RESERVED.
#
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

[build-system]
requires = ["packaging", "setuptools>=61.0.0", "wheel"]
requires = ["setuptools>=61.0.0"]
build-backend = "setuptools.build_meta"

[project]
name = "cuda-cooperative"
description = "Experimental Core Library for CUDA Python"
authors = [{ name = "NVIDIA Corporation" }]
classifiers = [
"Programming Language :: Python :: 3 :: Only",
"Environment :: GPU :: NVIDIA CUDA",
"License :: OSI Approved :: Apache Software License",
]
requires-python = ">=3.9"
dependencies = [
"cuda-cccl",
"numpy",

[Review comment] I don't have a good way to declare a version constraint for cuda-cccl statically; I suspect we will need to move the dependencies to setup.py's install_requires. Let us do this in another PR. (A sketch of that approach appears after this diff.)

"numba>=0.60.0",
"pynvjitlink-cu12>=0.2.4",
"cuda-python==12.*",
"jinja2",
]
dynamic = ["version", "readme"]

[project.optional-dependencies]
test = ["pytest", "pytest-xdist"]

[project.urls]
Homepage = "https://developer.nvidia.com/"

[tool.setuptools.dynamic]
version = { attr = "cuda.cooperative._version.__version__" }
readme = { file = ["README.md"], content-type = "text/markdown" }

[tool.ruff]
extend = "../../pyproject.toml"

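
The review comment in the dependencies section above suggests moving that list into setup.py's `install_requires` so a version constraint for cuda-cccl can be computed dynamically. A minimal sketch of what that could look like for cuda_cooperative, assuming the version scheme written by ci/update_version.sh; this is illustrative only and not part of this PR.

```python
# Hypothetical cuda_cooperative/setup.py sketch (NOT part of this PR).
import re
from pathlib import Path

from setuptools import setup

# Read the version written by ci/update_version.sh:
# cuda_cooperative uses "<pymajor>.<pyminor>.<major>.<minor>.<patch>",
# while cuda-cccl uses "<major>.<minor>.<patch>", so the last three fields match.
version_text = (Path(__file__).parent / "cuda" / "cooperative" / "_version.py").read_text()
full_version = re.search(r'__version__ = "(.+)"', version_text).group(1)
cccl_version = ".".join(full_version.split(".")[2:])

setup(
    install_requires=[
        f"cuda-cccl=={cccl_version}",
        "numpy",
        "numba>=0.60.0",
        "pynvjitlink-cu12>=0.2.4",
        "cuda-python==12.*",
        "jinja2",
    ],
)
```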