
Blackwell version with cuda 12.8 please! #2519

Open
dirtmonster1337 opened this issue Mar 18, 2025 · 3 comments

@dirtmonster1337

Hello!

Some of us have 50xx cards that use Blackwell/sm_120 architecture and we can't use RVC until this is fixed :(

What we need:

- the embedded Python/PyTorch to use the nightly build from https://pytorch.org/:

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

Could someone please fork it or fix it using this PyTorch?

@sakatatoushirou

sakatatoushirou commented Mar 20, 2025

You can install the nightly build of torch yourself. First, uninstall the pre-installed version:

.\runtime\python.exe -m pip uninstall torch torchaudio torchvision -y

Then install the cu128 nightly build:

.\runtime\python.exe -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
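After the install finishes, one way to sanity-check that the wheel you actually got is a cu128 build is to look at the local version tag PyTorch puts at the end of `torch.__version__` (e.g. `2.8.0.dev20250323+cu128`). The helper below is just an illustration of that string check, not part of RVC:

```python
def is_cu128_build(version: str) -> bool:
    # PyTorch wheels encode the CUDA toolkit in the local version segment
    # after the "+", e.g. "2.8.0.dev20250323+cu128" for the cu128 nightly.
    return version.split("+")[-1] == "cu128"

# Inside the runtime you would pass torch.__version__ here instead.
print(is_cu128_build("2.8.0.dev20250323+cu128"))  # True
print(is_cu128_build("2.0.0+cu118"))              # False
```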

@dirtmonster1337
Author

WARNING: The script isympy.exe is installed in '.\RVC1006Nvidia\runtime\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts torchfrtrace.exe and torchrun.exe are installed in '.\RVC1006Nvidia\runtime\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch-directml 0.2.0.dev230426 requires torch==2.0.0, but you have torch 2.8.0.dev20250323+cu128 which is incompatible.
torch-directml 0.2.0.dev230426 requires torchvision==0.15.1, but you have torchvision 0.22.0.dev20250324+cu128 which is incompatible.
xformers 0.0.19 requires torch==2.0.0, but you have torch 2.8.0.dev20250323+cu128 which is incompatible.

Any thoughts on this?
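For what it's worth, the conflicts pip reports come from torch-directml 0.2.0.dev230426 and xformers 0.0.19 pinning exact versions (`torch==2.0.0`), which a nightly wheel can never satisfy. Roughly speaking (this is a simplification of pip's real PEP 440 resolver, for illustration only), an `==` pin fails against any other version string:

```python
def violates_exact_pin(installed: str, pin: str) -> bool:
    # Simplified: a "pkg==X" requirement is satisfied only by version X,
    # so a nightly build like 2.8.0.dev...+cu128 fails a "torch==2.0.0" pin.
    required = pin.split("==", 1)[1]
    return installed != required

print(violates_exact_pin("2.8.0.dev20250323+cu128", "torch==2.0.0"))  # True: conflict
```

Since torch-directml is the DirectML backend and isn't used on the CUDA path, uninstalling it (and xformers, if you can live without the optional features its warning lists) would be one way to clear the conflict; whether RVC needs either package on an NVIDIA card is worth confirming first.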

@dirtmonster1337
Author

On load it says:

WARNING | xformers | WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.8.0.dev20250323+cu128)
Python 3.9.13 (you have 3.9.13)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
2025-03-24 09:25:22 | INFO | configs.config | Found GPU NVIDIA GeForce RTX 5080
is_half:True, device:cuda:0
