Replies: 1 comment
Issue Summary
Building llama-cpp-python with CUDA support fails due to a GLIBC version incompatibility. The build progresses successfully until the final linking stage for the vision tools.

Environment Details
OS: Ubuntu Linux

Root Cause
The CUDA library libcublasLt.so.12 requires GLIBC symbols such as log2f@GLIBC_2.27 (i.e. GLIBC 2.27 or newer).

Build Command Used
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" pip install -e . --verbose

What's Been Tried
Using system CUDA instead of conda CUDA - same error.

Question
How do I resolve GLIBC version conflicts when building llama-cpp-python with CUDA?
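One way to confirm the mismatch is to compare the GLIBC version the system provides against the symbol versions the CUDA library actually requires. A minimal diagnostic sketch; the library path is an assumption, adjust it to your CUDA install:

```shell
# Report the GLIBC version this system provides (glibc-based distros only)
ldd --version | head -n 1

# List every GLIBC symbol version libcublasLt.so.12 requires.
# The path below is an assumption -- point it at your actual CUDA install.
objdump -T /usr/local/cuda/lib64/libcublasLt.so.12 \
  | grep -o 'GLIBC_[0-9.]*' | sort -u -V
```

If the highest version objdump prints (e.g. GLIBC_2.27) exceeds what ldd reports, the link failure is expected: either the system glibc must be updated or a CUDA build targeting the older glibc is needed.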
Hi,
I was able to install torch using:
pip install --pre torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
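After installing the nightly wheel, a quick check confirms the build was compiled with CUDA and can see the GPU. A sketch, assuming the install above succeeded:

```python
# Sketch: confirm the installed torch nightly was built with CUDA support.
# Assumes the pip install above succeeded; prints False if no GPU is visible.
import importlib.util

if importlib.util.find_spec("torch") is not None:
    import torch
    print(torch.__version__)           # nightly builds carry a +cu128 tag
    print(torch.cuda.is_available())   # True when the driver and GPU match
else:
    print("torch is not installed")
```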
This confirms support; nvcc lists the GPU codes it can target:
(base) oba@astropema:~/gguf$ nvcc --list-gpu-code
sm_50
sm_52
sm_53
sm_60
sm_61
sm_62
sm_70
sm_72
sm_75
sm_80
sm_86
sm_87
sm_89
sm_90
sm_100
sm_101
sm_103
sm_120
sm_121
(base) oba@astropema:~/gguf$
The llama-cpp-python compile keeps failing on me, and I don't want to break what's working. The pip install was going to install an older lib.
The problem is that I need llama-cpp-python built for that version of the GPU.
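One way to avoid the older prebuilt wheel is to force a source build pinned to an architecture nvcc reported above. A sketch; "120" (sm_120) is an example value taken from the list and must be replaced with your card's actual compute capability:

```shell
# Hedged sketch: build llama-cpp-python from source for one specific GPU arch.
# "120" is an example drawn from the nvcc list above; change it for your card.
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=120" \
  pip install --force-reinstall --no-cache-dir llama-cpp-python --verbose
```

Pinning the architecture also skips the `native` detection step, which can pick the wrong target when the build machine differs from the machine the wheel will run on.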
Regards,
oba