FAQ: Can I use this as a drop-in replacement for PyTorch? #12
-
Awesome work. I have a question: can this be used as a drop-in replacement for PyTorch? My expectation is that it would work nicely on systems that don't have CUDA but where OpenCL is supported (AMD, ARM). I also hope it will be possible to change lines like `checkpoint = load_checkpoint(pth, torch.device("cuda"))` into `checkpoint = load_checkpoint(pth, torch.device("opencl"))`. Please let me know; I would love to build and try it.
Replies: 11 comments 60 replies
-
Yes. However, this is an early version, and a lot of the operators/functions aren't implemented yet.
-
I need to start working on integration with torch 1.12 ASAP - it should allow using the backend without changes in the torch code itself.
-
Yes, this is right. The built-in OpenCL device does not work; it was more of a placeholder. Once I integrate with 1.12, which provides support for private devices, you'll be able to just use 'privateuseone' or something like that (I still need to test) with vanilla PyTorch. It would simplify things.
-
DLPrim provides initial, basic ONNX support. It has been tested on a set of standard vision models (still limited, but it works), so I think it would be much better, since ONNX allows you to disconnect inference from training.
-
Not familiar with it.
Give it a shot. Currently my general directions are:
-
Unfortunately, Caffe is dead. I like it a lot, especially for its simple and easy-to-use code base, but it isn't up to current standards, especially in memory management. However, it still provides a complete Caffe + OpenCL implementation that works quite nicely.
-
I started working with the pytorch nightly, and you no longer need to build pytorch itself. It is early and some nets fail, but this is the direction.
-
Today or yesterday, pytorch 1.13 was released, and it is the version you can use (the 1.14 nightly works as well). The difference is that in 1.13 you must refer to the OpenCL device by name as
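For reference, PyTorch's hook for out-of-tree accelerators is the PrivateUse1 dispatch key, whose default device string is `privateuseone`. A minimal sketch of what using the backend could look like; the library path `build/libpt_ocl.so` is an assumption about your build layout, and the sketch falls back to CPU when the backend isn't present:

```python
import torch

try:
    # Assumption: path/name of the built backend library; adjust for your build.
    torch.ops.load_library("build/libpt_ocl.so")
    dev = "privateuseone:0"  # PyTorch's PrivateUse1 device string
except (OSError, RuntimeError):
    dev = "cpu"  # fall back so the sketch still runs without the backend

x = torch.randn(4, 4, device=dev)
y = x @ x  # ordinary tensor ops dispatch to the selected device
print(y.shape, y.device)
```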
-
I merged the updated pytorch support to master, so you can use the stable 1.13 version. I also tested the setup on Windows; now you can use it on Windows as well.
-
So I tried to make OpenAI's Whisper work with dlprim, and I got an error that I am unsure how to fix.
-
Did you clone the repository recursively?
Artyom
On Saturday, December 3, 2022, 08:41:56 AM GMT+2, Evgeny Kurnevsky ***@***.***> wrote:
Just tried to build the latest version but got:
```
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
DLPRIM_INC
   used as include directory in directory /build/source
   used as include directory in directory /build/source
   used as include directory in directory /build/source
DLPRIM_LIB
   linked by target "pt_ocl" in directory /build/source

CMake Error in CMakeLists.txt:
  Found relative path while evaluating include directories of "pt_ocl":
    "DLPRIM_INC-NOTFOUND"
```