OpenXLA Support #908
Agreed. I'd actually go a step further and say you could dramatically simplify your architecture, while improving runtime speed, by using XLA as the only backend. I've used XLA (via JAX) quite a bit, and in my experience the code it generates is at least as fast as the code from other frameworks. But there is one blocking issue with this approach: XLA compilation is stupidly slow, and the framework doesn't provide good options for persisting compiled graphs. What do you think?
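To make the "persisting compiled graphs" point concrete, here is a minimal sketch of the kind of workaround this forces: an on-disk cache keyed by a hash of the computation. The `Computation` and `CompiledBlob` types and the `compile` function are placeholders defined just for this example, not real XLA, xla-rs, or dfdx API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::Path;

/// Placeholder for a built XLA computation; only what the caching idea
/// needs is modelled here. The real thing would come from the XLA builder.
struct Computation {
    hlo_text: String,
}

/// Placeholder for the serialized form of a compiled executable.
struct CompiledBlob(Vec<u8>);

/// Stand-in for the slow backend compile step we want to avoid repeating.
fn compile(c: &Computation) -> CompiledBlob {
    CompiledBlob(c.hlo_text.as_bytes().to_vec())
}

/// Compile with an on-disk cache keyed by a hash of the HLO text, so a
/// fresh process can skip recompilation when the graph hasn't changed.
fn compile_cached(c: &Computation, cache_dir: &Path) -> std::io::Result<CompiledBlob> {
    let mut hasher = DefaultHasher::new();
    c.hlo_text.hash(&mut hasher);
    let path = cache_dir.join(format!("{:016x}.bin", hasher.finish()));

    if let Ok(bytes) = fs::read(&path) {
        return Ok(CompiledBlob(bytes)); // cache hit: no recompilation
    }
    let blob = compile(c); // cache miss: pay the compile cost once
    fs::create_dir_all(cache_dir)?;
    fs::write(&path, &blob.0)?;
    Ok(blob)
}

fn main() -> std::io::Result<()> {
    let graph = Computation { hlo_text: "HloModule add ...".to_string() };
    let _exe = compile_cached(&graph, Path::new("/tmp/xla_cache"))?;
    Ok(())
}
```

Even with a cache like this, the first run still pays the full compile cost, which is why compile latency matters for the "XLA as the only backend" idea.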
What are your thoughts on using IREE? The IREE runtime supports a diverse range of backends such as ROCm, CUDA, WebGPU, and Metal. Additionally, it would be beneficial if the models could be AOT compiled at Rust compile time, though I'm uncertain whether the current architecture of dfdx supports this.
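If the IREE route were explored, the AOT-at-Rust-compile-time part could look roughly like the build script below. This is only a sketch: the iree-compile flags are from memory and vary between IREE releases, and model.mlir is assumed to be a model already exported to an MLIR dialect IREE accepts (e.g. StableHLO).

```rust
// build.rs — sketch of invoking IREE's ahead-of-time compiler during the
// Rust build. Flag names are approximate and may need adjusting for the
// installed IREE version.
use std::env;
use std::path::PathBuf;
use std::process::Command;

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
    let vmfb = out_dir.join("model.vmfb");

    // `model.mlir` is assumed to exist in the crate root; producing it
    // (exporting the model to MLIR) is out of scope for this sketch.
    let status = Command::new("iree-compile")
        .arg("model.mlir")
        .arg("--iree-hal-target-backends=llvm-cpu") // or cuda, vulkan-spirv, ...
        .arg("-o")
        .arg(&vmfb)
        .status()
        .expect("failed to spawn iree-compile");
    assert!(status.success(), "iree-compile failed");

    println!("cargo:rerun-if-changed=model.mlir");
}
```

The generated model.vmfb could then be embedded into the binary with `include_bytes!(concat!(env!("OUT_DIR"), "/model.vmfb"))` and handed to the IREE runtime at startup, so end users never run the compiler themselves.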
I've never used IREE, but that also seems like a good option. Perhaps the critique should be "don't manually support various backends; instead, choose an IR and target that".
There are some proof-of-concept XLA bindings for Rust: https://github.com/LaurentMazare/xla-rs. It would be really cool to integrate those into dfdx to allow for TPUs and other accelerators in addition to CUDA.
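To make the integration idea a bit more concrete, here is the general client → builder → compile → execute shape that XLA bindings like xla-rs expose. Every type and method below is a stub defined in the snippet itself so it stands alone; the crate's real names differ, so read this as the shape of the integration rather than its API.

```rust
/// Stands in for a PJRT client; in the real bindings this is where
/// "cpu vs. gpu vs. tpu" is decided, which is the whole appeal for dfdx.
struct Client { device: &'static str }

/// Stands in for an XLA builder accumulating ops for one computation.
struct Builder { ops: Vec<&'static str> }

/// Stands in for a compiled executable that can be run repeatedly.
struct Executable { device: &'static str, op_count: usize }

impl Client {
    fn for_device(device: &'static str) -> Self { Client { device } }
    fn compile(&self, b: &Builder) -> Executable {
        Executable { device: self.device, op_count: b.ops.len() }
    }
}

impl Builder {
    fn new() -> Self { Builder { ops: Vec::new() } }
    // In a dfdx integration, each tensor op would append to the graph here
    // instead of dispatching a hand-written CUDA kernel.
    fn matmul(&mut self) { self.ops.push("dot_general"); }
    fn relu(&mut self) { self.ops.push("max"); }
}

impl Executable {
    fn run(&self) {
        println!("running {} ops on {}", self.op_count, self.device);
    }
}

fn main() {
    // Swapping "tpu" or "gpu" in here is the only device-specific part.
    let client = Client::for_device("cpu");
    let mut builder = Builder::new();
    builder.matmul();
    builder.relu();
    let exe = client.compile(&builder);
    exe.run(); // compile once, execute many times
}
```

The appealing part for dfdx is the first line of main: targeting a TPU or another accelerator would only change how the client is constructed, while op lowering and the compile/execute path stay the same.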