
OSError: [WinError 126] The specified module could not be found. Error loading "E:\programs\python\Python312\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies. #804


Open
durongze opened this issue Apr 6, 2025 · 8 comments


durongze commented Apr 6, 2025

Describe the bug

OSError: [WinError 126] The specified module could not be found. Error loading "E:\programs\python\Python312\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies.

Versions

Traceback (most recent call last):
  File "E:\programs\python\Python312\Lib\site-packages\torch\__init__.py", line 2756, in _import_device_backends
    entrypoint = backend_extension.load()
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\programs\python\Python312\Lib\importlib\metadata\__init__.py", line 205, in load
    module = import_module(match.group('module'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\programs\python\Python312\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "E:\programs\python\Python312\Lib\site-packages\intel_extension_for_pytorch\__init__.py", line 124, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "E:\programs\python\Python312\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\programs\python\Python312\Lib\site-packages\torch\__init__.py", line 2784, in <module>
    _import_device_backends()
  File "E:\programs\python\Python312\Lib\site-packages\torch\__init__.py", line 2760, in _import_device_backends
    raise RuntimeError(
RuntimeError: Failed to load the backend extension: intel_extension_for_pytorch. You can disable extension auto-loading with TORCH_DEVICE_BACKEND_AUTOLOAD=0.
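As the message suggests, the auto-loading of the backend extension can be disabled by setting the environment variable before torch is imported. A minimal sketch (this only sidesteps the auto-load hook; it does not fix the underlying DLL problem):

```python
import os

# Must be set BEFORE "import torch": the backend auto-load hook
# runs during torch's own import, so setting it afterwards has no effect.
os.environ["TORCH_DEVICE_BACKEND_AUTOLOAD"] = "0"

# import torch   # torch would now skip loading intel_extension_for_pytorch
```

Alternatively, set the variable in the shell environment before launching Python at all.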

@feng-intel feng-intel self-assigned this Apr 7, 2025
@feng-intel

Could you provide your code and the steps that led to this issue?


rkilchmn commented Apr 7, 2025

I have the same issue with ipex-llm[xpu_2.6]; it happens when transformers is imported.
I have an Intel iGPU Gen11 on Win11.

#!/usr/bin/env python3
import os
import sys
import argparse
from transformers import pipeline  # <-- fails here

my requirements.txt:
transformers
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
ipex-llm[xpu_2.6]
torch
intel-extension-for-pytorch

The error:

.conda\python.exe whisper.py -m distil-whisper/distil-large-v3 -q 4-bit -d xpu -P 8081
Traceback (most recent call last):
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\torch\__init__.py", line 2756, in _import_device_backends
    entrypoint = backend_extension.load()
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\importlib\metadata\__init__.py", line 205, in load
    module = import_module(match.group('module'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\intel_extension_for_pytorch\__init__.py", line 124, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm\whisper.py", line 6, in <module>
    from transformers import pipeline
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\dependency_versions_check.py", line 16, in <module>
    from .utils.versions import require_version, require_version_core
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\utils\__init__.py", line 25, in <module>
    from .chat_template_utils import DocstringParsingException, TypeHintParsingException, get_json_schema
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\utils\chat_template_utils.py", line 40, in <module>
    from torch import Tensor
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\torch\__init__.py", line 2784, in <module>
    _import_device_backends()
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\torch\__init__.py", line 2760, in _import_device_backends
    raise RuntimeError(
RuntimeError: Failed to load the backend extension: intel_extension_for_pytorch. You can disable extension auto-loading with TORCH_DEVICE_BACKEND_AUTOLOAD=0.

@ZailiWang (Contributor)

Would you try with the suggestion here?


rkilchmn commented Apr 7, 2025

I tried TORCH_DEVICE_BACKEND_AUTOLOAD=0 and got a bit further (some dependencies were missing).
But after that I ended up here again. So it is the same error, just later in the execution:

(c:\Users\Documents\project\openedai-whisper-ipex-llm.conda) c:\Users\Documents\project\openedai-whisper-ipex-llm>.conda\python.exe whisper.py -m distil-whisper/distil-large-v3 -q 4-bit -d xpu -P 8081
Traceback (most recent call last):
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\utils\import_utils.py", line 1967, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\pipelines\__init__.py", line 49, in <module>
    from .audio_classification import AudioClassificationPipeline
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\pipelines\audio_classification.py", line 21, in <module>
    from .base import Pipeline, build_pipeline_init_args
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\pipelines\base.py", line 69, in <module>
    from ..modeling_utils import PreTrainedModel
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\modeling_utils.py", line 158, in <module>
    import deepspeed
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\__init__.py", line 25, in <module>
    from . import ops
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\ops\__init__.py", line 6, in <module>
    from . import adam
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\ops\adam\__init__.py", line 6, in <module>
    from .cpu_adam import DeepSpeedCPUAdam
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\ops\adam\cpu_adam.py", line 8, in <module>
    from deepspeed.utils import logger
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\utils\__init__.py", line 10, in <module>
    from .groups import *
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\utils\groups.py", line 28, in <module>
    from deepspeed import comm as dist
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\comm\__init__.py", line 7, in <module>
    from .comm import *
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\comm\comm.py", line 31, in <module>
    from deepspeed.comm.ccl import CCLBackend
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\comm\ccl.py", line 11, in <module>
    from deepspeed.ops.op_builder import NotImplementedBuilder
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\ops\op_builder\__init__.py", line 53, in <module>
    this_module.__dict__[member_name] = builder_closure(member_name)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\ops\op_builder\__init__.py", line 41, in builder_closure
    builder = get_accelerator().get_op_builder(member_name)
              ^^^^^^^^^^^^^^^^^
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\deepspeed\accelerator\real_accelerator.py", line 133, in get_accelerator
    import intel_extension_for_pytorch as ipex
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\intel_extension_for_pytorch\__init__.py", line 124, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm\whisper.py", line 12, in <module>
    from transformers import pipeline
  File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\utils\import_utils.py", line 1955, in __getattr__
    module = self._get_module(self._class_to_module[name])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\transformers\utils\import_utils.py", line 1969, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
[WinError 126] The specified module could not be found. Error loading "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies.

@ZailiWang (Contributor)

Would you try with conda install libuv?


durongze commented Apr 8, 2025

I'll try it tomorrow. Thanks for the reply.


rkilchmn commented Apr 9, 2025

Reinstalled everything with "conda install libuv" - still the same error.
The file "c:\Users\Documents\project\openedai-whisper-ipex-llm.conda\Lib\site-packages\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" is in the directory:

Mode          LastWriteTime       Length      Name
----          -------------       ------      ----
-a----        9/04/2025 8:34 AM   3006976     esimd_kernels.dll
-a----        9/04/2025 8:34 AM   264237568   intel-ext-pt-gpu.dll
-a----        9/04/2025 8:34 AM   460288      intel-ext-pt-python.dll
-a----        9/04/2025 8:34 AM   1920765440  xetla_gemm.dll
-a----        9/04/2025 8:34 AM   17577472    xetla_XeHpc_qmode0_bf16_1_1_1_128_1_1_0_0.dll
-a----        9/04/2025 8:34 AM   18438144    xetla_XeHpc_qmode0_bf16_1_1_1_256_1_1_0_0.dll
-a----        9/04/2025 8:34 AM   20814848    xetla_XeHpc_qmode0_bf16_1_1_1_512_1_1_0_0.dll
-a----        9/04/2025 8:34 AM   16517120    xetla_XeHpc_qmode0_fp16_1_1_1_128_1_1_0_0.dll
(continues)
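Since the DLL file itself exists, WinError 126 most likely refers to one of its dependencies. One way to confirm this is to try loading the DLL directly with ctypes, outside of torch; probe_dll below is a hypothetical diagnostic helper, not part of any library:

```python
import ctypes
import os
import sys


def probe_dll(path):
    """Try to load a Windows DLL directly. If the file exists but loading
    still raises WinError 126, one of its dependent DLLs cannot be found."""
    if sys.platform != "win32":
        return "skipped: not Windows"
    if not os.path.isfile(path):
        return "missing file"
    try:
        ctypes.WinDLL(path)
        return "loaded"
    except OSError as exc:
        return "load failed: {}".format(exc)
```

On Python 3.8+ for Windows, PATH is no longer searched for dependencies of extension DLLs by default, so directories holding the dependent runtimes (for Intel GPU wheels, typically the oneAPI runtime's bin folders) may need to be registered with os.add_dll_directory() before the import.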


Yu-ppy commented Apr 16, 2025

I also encountered this problem. The cause is that torch+cpu was installed in the environment. Uninstall the original torch and reinstall torch+xpu following the command in the documentation.
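One way to check which build is installed is to inspect the local version tag that torch reports, e.g. "2.6.0+cpu" vs "2.6.0+xpu". The helper below is purely illustrative (not a real torch API); in practice you would pass it torch.__version__:

```python
def is_xpu_build(version):
    """Return True if a torch version string carries the XPU local tag,
    e.g. '2.6.0+xpu'. CPU wheels report tags like '2.6.0+cpu'."""
    # The local tag is everything after the '+' (empty if there is none).
    return version.partition("+")[2].startswith("xpu")
```

If this returns False for torch.__version__, the CPU wheel is installed and intel-ext-pt-gpu.dll will fail to load as described above.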
