cudnn_home not valid during build #33
Comments
https://github.com/triton-inference-server/onnxruntime_backend/blob/main/tools/gen_ort_dockerfile.py#L93
I have encountered this issue too. Perhaps it would be a good idea to let users enter the path for cuDNN, similar to CUDA.
@mfruhner: Were you able to find a workaround? I simply updated the gen_ort_dockerfile.py script to explicitly include the cudnn path.
@mfruhner, did you resolve it? If the build were failing for everyone, many people should be reporting it, but only a few are in this thread; are we doing something wrong? I'm using
@CoderHam https://paste.ubuntu.com/p/nF3HCcYycR/ is the
@askhade what should be the value of
--cudnn_home should be set to the path of the directory containing the cuDNN libraries.
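To make that concrete, here is a minimal Python sketch of the kind of check that produces the valid=False message in the build log. The helper name and example paths are illustrative assumptions, not the actual build.sh validation code:

```python
import os

def is_valid_cudnn_home(path):
    """Sketch of the validation build.sh performs on --cudnn_home:
    the path must exist and contain a libcudnn shared object."""
    if not path or not os.path.isdir(path):
        return False
    return any(name.startswith("libcudnn.so") for name in os.listdir(path))

# With no --cudnn_home flag, the path stays unset, matching the
# cudnn_home='None' valid=False error in the log:
is_valid_cudnn_home(None)  # → False
# A directory such as /usr/lib/x86_64-linux-gnu (a common apt-install
# location on Ubuntu) passes only if libcudnn.so* is actually present.
```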
@askhade or @GowthamKudupudi Are you able to resolve it? I am trying this on CentOS 7. I am able to build the tensorflow1, tensorflow2, python, and pytorch backends, but I get an error when I try to build the ONNX backend. This is the error:
Can you please explicitly mention where you edited it and how you ran it?
I didn't try to solve this any further and went on to use something else, sorry.
@chandrameenamohan: I suppose you are hitting this issue because of --no-container-build. Can you remove it and test again?
The liked comment by @CoderHam is the key.
Any known fixes on how to pass the
The following changes to build.py did the trick (in case somebody else has come across a similar issue & looking for an easy fix):
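The actual diff is not quoted above, but a hypothetical sketch of the kind of build.py change being described might look like the following. The function and variable names, and the default path, are illustrative assumptions, not the real Triton build.py internals:

```python
import os

def with_cudnn_home(ort_build_args, default="/usr/lib/x86_64-linux-gnu"):
    """Append --cudnn_home to the ONNX Runtime build.sh argument list
    unless the caller already supplied one, taking the path from the
    CUDNN_HOME environment variable when it is set."""
    if "--cudnn_home" not in ort_build_args:
        ort_build_args = ort_build_args + [
            "--cudnn_home",
            os.environ.get("CUDNN_HOME", default),
        ]
    return ort_build_args

args = with_cudnn_home(["--use_cuda", "--cuda_home", "/usr/local/cuda"])
# args now carries an explicit --cudnn_home, so build.sh's
# "cudnn_home paths must be specified and valid" check can pass.
```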
I am not sure if
@askhade @GowthamKudupudi $ ./build.py -v --no-container-build --build-dir=`pwd`/build --enable-all |
@aravindhank11 $ ./build.py -v --no-container-build --build-dir=`pwd`/build --enable-all |
@aravindhank11 It also works when building Triton/server/r23.04 |
@aravindhank11 Without using the patch, it also works to build Triton/server/r23.04 without Docker with the command below:
I just want to report that this bug still exists in v23.11, and I solved it by changing it. There is another bug related to build.sh requiring a non-privileged user for it to run, hence you want to use one. Then it compiles with no further problems.
v24.09
Description
I am not able to build the ONNX Backend. I am following the build instructions in the README but the build fails at Step 17.
Triton Information
Main branch for Triton version 21.02
To Reproduce
I am running DGX OS 5 (Ubuntu 20.04).
cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install -DTRITON_BUILD_ONNXRUNTIME_VERSION=1.6.0 -DTRITON_BUILD_CONTAINER_VERSION=21.02 ..
make install
Output:
Step 17/24 : RUN ./build.sh ${COMMON_BUILD_ARGS} --update --build --use_cuda --cuda_home "/usr/local/cuda"
 ---> Running in 3360f12bb769
2021-03-18 11:01:00,463 build [ERROR] - cuda_home and cudnn_home paths must be specified and valid.
cuda_home='/usr/local/cuda' valid=True. cudnn_home='None' valid=False
The command '/bin/sh -c ./build.sh ${COMMON_BUILD_ARGS} --update --build --use_cuda --cuda_home "/usr/local/cuda"' returned a non-zero code: 1
make[2]: *** [CMakeFiles/ort_target.dir/build.make:81: onnxruntime/lib/libonnxruntime.so.1.6.0] Error 1
make[1]: *** [CMakeFiles/Makefile2:158: CMakeFiles/ort_target.dir/all] Error 2
make: *** [Makefile:149: all] Error 2
Expected behavior
I expect the build to succeed.