dlpack interface with dpctl fails on CPU (in both ways) #368

Comments
Not sure this is workable on CPU.
Hi @fcharras, may I know what the code below prints?

```python
import torch
import intel_extension_for_pytorch
torch.xpu.has_multi_context()
```

I assume it will print
I build from source using the … My environment is a bit particular because I use binaries I compiled for … I'll report the results on the Flex series once I've managed to overcome current issues with the …
OK, in my opinion the reason is that the tensor created by dpctl is allocated in a different SYCL context from the runtime's context in IPEX. I strongly recommend creating the tensor using the torch API.
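A minimal sketch of the recommended workflow, allocating the tensor through the torch API so it lives in IPEX's own context. The `"xpu"` device string and the CPU fallback here are assumptions for illustration; `intel_extension_for_pytorch` registers the `xpu` device when installed.

```python
import torch

# Hedged sketch: create the tensor with the torch API instead of importing
# a dpctl-allocated buffer, so it is allocated in IPEX's own SYCL context.
# The try/except fallback to CPU is an assumption for machines without IPEX.
try:
    import intel_extension_for_pytorch  # noqa: F401  (registers the "xpu" device)
    device = "xpu" if torch.xpu.is_available() else "cpu"
except ImportError:
    device = "cpu"

t = torch.ones(2, 3, device=device)

# A DLPack round-trip within torch stays inside a single allocation context.
capsule = torch.utils.dlpack.to_dlpack(t)
t2 = torch.utils.dlpack.from_dlpack(capsule)
```

Because both ends of the exchange are created by the same runtime, no cross-context transfer is involved.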
It would have been practical for me to be able to keep the dpctl workflow I had already in place but I agree it's probably a marginal usecase at the time being. I'll be fine using exclusively the torch API. Feel free to close this issue. |
Describe the bug

From dpctl to pytorch: fails with:

From pytorch to dpctl: fails with:

Should those conversions be possible?
I cross-posted this issue with the dpctl repository at IntelPython/dpctl#1240.

Versions
Environment information: