UR error during ComfyUI's tutorial with Stable Diffusion model on Ubuntu with TigerLake-H GT1 #781
@ValentinaGalataTNG:
.. and: Are you really using the iGPU of the processor?
Hi @RogerWeihrauch, Thanks for your reply!
No, I have not. According to the Intel Extension for PyTorch website, the supported Python versions are 3.9, 3.10, 3.11, 3.12. I went with the latest, i.e., 3.12, since this is also the recommended version for ComfyUI (see here):
Regarding your second comment:
That's a good point.... When running the small Python script below with the environment variable set (the one mentioned in the issue description):

```python
import torch
torch.manual_seed(0)
x = torch.rand(47000, 47000, dtype=torch.float32, device='xpu')
```

If I use the `torch.xpu` API, I get:

```python
torch.xpu.device_count()
# 1
torch.xpu.get_device_properties()
# _XpuDeviceProperties(name='Intel(R) UHD Graphics', platform_name='Intel(R) oneAPI Unified Runtime over Level-Zero', type='gpu', driver_version='1.3.29735+27', total_memory=29456MB, max_compute_units=32, gpu_eu_count=32, gpu_subslice_count=2, max_work_group_size=512, max_num_sub_groups=64, sub_group_sizes=[8 16 32], has_fp16=1, has_fp64=0, has_atomic64=1)
torch.xpu.get_device_capability()
# {'driver_version': '1.3.29735+27', 'gpu_eu_count': 32, 'gpu_subslice_count': 2, 'has_atomic64': True, 'has_bfloat16_conversions': False, 'has_fp16': True, 'has_fp64': False, 'has_subgroup_2d_block_io': False, 'has_subgroup_matrix_multiply_accumulate': False, 'has_subgroup_matrix_multiply_accumulate_tensor_float32': False, 'max_compute_units': 32, 'max_num_sub_groups': 64, 'max_work_group_size': 512, 'name': 'Intel(R) UHD Graphics', 'platform_name': 'Intel(R) oneAPI Unified Runtime over Level-Zero', 'sub_group_sizes': [8, 16, 32], 'total_memory': 30887301120, 'type': 'gpu', 'vendor': 'Intel(R) Corporation', 'version': '12.0.0'}
```

So, it does seem to see the GPU, and the available memory is 30 GB, which is the size of my RAM. I suspect that my GPU is a GPU without its own memory, which would explain the reported memory size. I have not been able to find official Intel documentation on this, but I have read people saying so on StackOverflow. I do not have much experience working with GPUs: would that explain the problems with ComfyUI? Obviously, the GPU is there and can be used, as shown by the small Python script above using `device='xpu'`.
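A quick way to go one step further than enumerating the device is to actually run a tiny kernel on it; the snippet below is a minimal sketch, assuming the same venv with the XPU-enabled PyTorch build installed:

```python
import torch

# Sanity check: confirm the XPU can execute work, not just be listed.
assert torch.xpu.is_available(), "no XPU device visible to PyTorch"

a = torch.randn(1024, 1024, device="xpu")
b = torch.randn(1024, 1024, device="xpu")
c = a @ b                     # small matmul executed on the iGPU
torch.xpu.synchronize()       # wait for the kernel to finish
print(c.abs().mean().item())  # bring a scalar back to the host
```

If this runs without errors, the driver/runtime stack can at least launch kernels, and the problem is more likely a memory limit or a specific op used by ComfyUI.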
Hi @ValentinaGalataTNG

Maybe an update may also help: git pull.

Hi @RogerWeihrauch, Thanks again for your suggestions. ✔ I did a git pull.
Yes, I am quite sure. I use a single Python venv to install ComfyUI dependencies and intel-extension. See also my notes below.
I removed the previously created venv and created a new one with the following modifications:
```sh
# python version
pyenv install 3.11
pyenv local 3.11

# create virtual env
python -m venv comfyui-venv
source comfyui-venv/bin/activate

# python version check
python --version # Python 3.11.11
which python     # ${PWD}/comfyui-venv/bin/python

# update pip
pip install --upgrade pip

# install ComfyUI requirements
pip install -r requirements.txt

# https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.5.10%20xpu&os=linux/wsl2&package=pip
python -m pip install torch==2.5.1+cxx11.abi torchvision==0.20.1+cxx11.abi torchaudio==2.5.1+cxx11.abi intel-extension-for-pytorch==2.5.10+xpu oneccl_bind_pt==2.5.0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# check package versions installed in last step
pip list
# truncated output
# Package                     Version
# --------------------------- ----------------
# intel_extension_for_pytorch 2.5.10+xpu
# oneccl-bind-pt              2.5.0+xpu
# torch                       2.5.1+cxx11.abi
# torchaudio                  2.5.1+cxx11.abi
# torchvision                 0.20.1+cxx11.abi
```

With this venv still being activated, I ran `python main.py` but still got the same error. 🫤
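Whether the freshly created venv really picks up the XPU build can be checked with a few lines of Python; this is only a sketch, assuming the packages installed above (the explicit `intel_extension_for_pytorch` import is just there to make sure the XPU wheels load):

```python
import torch
import intel_extension_for_pytorch as ipex  # loads the XPU extension installed above

# Print the versions this venv actually resolves and whether the iGPU is visible.
print("torch:", torch.__version__)
print("ipex: ", ipex.__version__)
print("xpu available:", torch.xpu.is_available())
if torch.xpu.is_available():
    print("device:", torch.xpu.get_device_properties(0).name)
```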
@ValentinaGalataTNG:
Definitive Guide to set up ComfyUI for INTEL ARC A770 GPU:

@ValentinaGalataTNG:
@RogerWeihrauch, thank you! If I remember correctly, I also tried to set lower values for the image dimensions, without much success.
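For the image-dimension experiment, a minimal sketch of what that could look like with an API-format workflow exported from ComfyUI is shown below; the file name, the 256x256 target size, and the assumption that the latent size lives in an `EmptyLatentImage` node are illustrative, not taken from this thread:

```python
import json

# Sketch: lower the latent image size in an exported API-format ComfyUI workflow.
# "workflow_api.json" is a hypothetical file name; adjust to the actual export.
with open("workflow_api.json") as f:
    workflow = json.load(f)

for node in workflow.values():
    if node.get("class_type") == "EmptyLatentImage":
        node["inputs"]["width"] = 256   # smaller than the common 512x512 default
        node["inputs"]["height"] = 256

with open("workflow_api_small.json", "w") as f:
    json.dump(workflow, f, indent=2)
```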
Describe the bug
Hello, 👋
When trying out ComfyUI's tutorial with the Stable Diffusion model, I get a UR error (see the logs below).
I found GitHub issues related to the problem of allocating more than 4 GB (e.g., #325). So, I also tried setting the environment variable suggested in those issues, but this has not changed anything.
Without setting that environment variable, I had out-of-memory issues for the script below. When I set it, the out-of-memory issue is gone. So, I assume the initial problem and the reported UR error are not related to this. Also, monitoring the GPU did not reveal anything special: I can see that the Python process uses it before the error appears, but that's it.
I have searched for similar issues and could not find anything specific except for #325. But, as said above, it does not seem to be related to the memory allocation issue.
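For reference, a quick back-of-the-envelope check (plain arithmetic, added here only as a sketch) shows why a single (47000, 47000) float32 tensor, like the one in the test script discussed in the comments, already exceeds a 4 GiB allocation limit:

```python
# Size of a single float32 tensor of shape (47000, 47000):
elements = 47_000 * 47_000      # 2,209,000,000 elements
size_bytes = elements * 4       # float32 = 4 bytes per element
print(size_bytes / 2**30)       # ≈ 8.2 GiB, well above a 4 GiB allocation limit
```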
Used setup
Python version: 3.12
ComfyUI version: v0.0.2-736-g8d88bfaf
After cloning the ComfyUI repository:
How to reproduce
Logs
Versions