How to use Intel dGPU to fine-tune a model? e.g. LoRA or QLoRA #806

Open
JamieVC opened this issue Apr 8, 2025 · 2 comments

Assignees: wangkl2
Labels: CPU (CPU specific issues) · Functionality · LLM · XPU/GPU (XPU/GPU specific issues)

Comments

JamieVC commented Apr 8, 2025

Describe the issue

Is fine-tuning a model with Intel hardware (dGPU, Xeon, Gaudi, etc.) implemented in IPEX? Thanks!

unrahul commented Apr 8, 2025

IPEX supports XPUs (Intel GPUs) and CPUs.

Here you go, both LoRA and QLoRA are supported: https://github.com/intel/intel-extension-for-pytorch/tree/release/xpu/2.6.10/examples/gpu/llm/fine-tuning

On Gaudi, please check out the optimum-habana library.
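
For reference, here is a minimal sketch of what LoRA fine-tuning on an Intel dGPU can look like via the `xpu` device, using Hugging Face `transformers` and `peft` rather than the linked IPEX example scripts. The model name, target modules, and hyperparameters below are illustrative assumptions; the linked examples remain the authoritative reference.

```python
# Minimal LoRA fine-tuning sketch for an Intel dGPU ("xpu" device).
# Assumes torch, intel_extension_for_pytorch, transformers, and peft
# are installed; the model and hyperparameters are placeholders.
import torch
import intel_extension_for_pytorch as ipex  # registers the XPU backend on older stock PyTorch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-350m"  # illustrative small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach LoRA adapters; only the low-rank adapter weights are trained.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.to("xpu")  # move to the Intel dGPU
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

# One toy training step to show the device flow end to end.
batch = tokenizer("Hello from an Intel XPU!", return_tensors="pt").to("xpu")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

QLoRA follows the same shape but loads the base model quantized to 4-bit before attaching the adapters; see the linked fine-tuning examples for the supported path on XPU.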

wangkl2 self-assigned this Apr 9, 2025
wangkl2 added the CPU (CPU specific issues), XPU/GPU (XPU/GPU specific issues), Functionality and LLM labels Apr 9, 2025

wangkl2 (Member) commented Apr 9, 2025

@JamieVC
