Is there an existing issue for this?
I have searched the existing issues and checked the recent builds/commits
What happened?
BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'}
While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
Original traceback:
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
hidden_states = self.norm1(hidden_states)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
Time taken: 6.5 sec.
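The failing node is a GroupNorm whose weights are sized for 320 channels but which receives an input of shape [2, 1280] during FX shape propagation. A minimal sketch that reproduces the same RuntimeError with stock PyTorch, using only the shapes from the log (illustrative only, not the extension's actual code path):

import torch
import torch.nn.functional as F

# GroupNorm parameters sized for 320 channels, as in the UNet's first ResNet block
weight = torch.ones(320)
bias = torch.zeros(320)

# An activation with 1280 features instead of a [N, 320, H, W] feature map
x = torch.randn(2, 1280)

# Raises: RuntimeError: Expected weight to be a vector of size equal to the
# number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
F.group_norm(x, num_groups=32, weight=weight, bias=bias, eps=1e-5)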
Steps to reproduce the problem
Go to ....
Press ....
...
What should have happened?
Images should be generated normally.
Sysinfo
venv "D:\system\Documents\SD\stable-diffusion-webui\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 4400629
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [6ce0161689] from D:\system\Documents\SD\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Creating model from config: D:\system\Documents\SD\stable-diffusion-webui\configs\v1-inference.yaml
Startup time: 10.6s (prepare environment: 0.4s, import torch: 3.6s, import gradio: 1.1s, setup paths: 1.0s, initialize shared: 0.5s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 2.4s, create ui: 0.4s, gradio launch: 0.4s).
Applying attention optimization: InvokeAI... done.
Model loaded in 3.5s (load weights from disk: 0.6s, create model: 0.3s, apply weights to model: 2.4s, calculate empty prompt: 0.1s).
{}
Loading weights [6ce0161689] from D:\system\Documents\SD\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
OpenVINO Script: created model from config : D:\system\Documents\SD\stable-diffusion-webui\configs\v1-inference.yaml
0%| | 0/20 [00:00<?, ?it/s][2023-12-09 18:52:21,913] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,260] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,295] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,326] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,504] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,562] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,601] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,821] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:22,858] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py <function Conv2d.forward at 0x000001EF7F8DD900> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:23,059] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-12-09 18:52:23,115] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
list index out of range
Traceback (most recent call last):
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 200, in openvino_fx
compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 426, in openvino_compile_cached_model
om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
result = super().run_node(n)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
return submod(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
return originals.GroupNorm_forward(self, input)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
0%| | 0/20 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(av52dfc5c5rmdah)', 'tree', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001EF3F382B00>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 200, in openvino_fx
compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 426, in openvino_compile_cached_model
om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
result = super().run_node(n)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
return submod(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
return originals.GroupNorm_forward(self, input)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 107, in wrapper
return fn(model, inputs, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 233, in openvino_fx
return compile_fx(subgraph, example_inputs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 415, in compile_fx
model_ = overrides.fuse_fx(model_, example_inputs_)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 96, in fuse_fx
gm = mkldnn_fuse_fx(gm, example_inputs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\mkldnn.py", line 509, in mkldnn_fuse_fx
ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 185, in propagate
return super().run(*args)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 136, in run
self.env[node] = self.run_node(node)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 152, in run_node
raise RuntimeError(
RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\system\\Documents\\SD\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'}
While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
Original traceback:
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
hidden_states = self.norm1(hidden_states)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\system\Documents\SD\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\system\Documents\SD\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "D:\system\Documents\SD\stable-diffusion-webui\modules\scripts.py", line 601, in run
processed = script.run(p, *script_args)
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1228, in run
processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 979, in process_images_openvino
output = shared.sd_diffusers_model(
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 840, in __call__
noise_pred = self.unet(
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 924, in forward
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1066, in <graph break in forward>
sample, res_samples = downsample_block(
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1159, in forward
hidden_states = resnet(hidden_states, temb, scale=lora_scale)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 311, in transform
tracer.run()
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1726, in run
super().run()
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 576, in run
and self.step()
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 372, in wrapper
self.output.compile_subgraph(self, reason=reason)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 541, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 588, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 675, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\system\\Documents\\SD\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'}
While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
Original traceback:
File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
hidden_states = self.norm1(hidden_states)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
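As the final lines of the error suggest, the compile failure can be suppressed so the affected subgraphs fall back to eager execution. A minimal sketch of that workaround (it sidesteps the error rather than fixing it, and the fallback path will not be OpenVINO-accelerated):

import torch._dynamo

# Print full Dynamo diagnostics for the failing node, as the error message suggests
torch._dynamo.config.verbose = True

# Fall back to eager execution when the openvino_fx backend fails to compile a
# subgraph, instead of raising BackendCompilerFailed
torch._dynamo.config.suppress_errors = True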
What browsers do you use to access the UI ?
Google Chrome
Console logs
venv "D:\system\Documents\SD\stable-diffusion-webui\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 44006297e03a07f28505d54d6ba5fd55e0c1292d
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [6ce0161689] from D:\system\Documents\SD\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set`share=True`in`launch()`.Creating model from config: D:\system\Documents\SD\stable-diffusion-webui\configs\v1-inference.yamlStartup time: 10.6s (prepare environment: 0.4s, import torch: 3.6s, import gradio: 1.1s, setup paths: 1.0s, initialize shared: 0.5s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 2.4s, create ui: 0.4s, gradio launch: 0.4s).Applying attention optimization: InvokeAI... done.Model loaded in 3.5s (load weights from disk: 0.6s, create model: 0.3s, apply weights to model: 2.4s, calculate empty prompt: 0.1s).{}Loading weights [6ce0161689] from D:\system\Documents\SD\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensorsOpenVINO Script: created model from config : D:\system\Documents\SD\stable-diffusion-webui\configs\v1-inference.yaml 0%|| 0/20 [00:00<?, ?it/s][2023-12-09 18:52:21,913] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,260] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,295] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,326] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,504] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,562] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,601] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,821] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:22,858] torch._dynamo.symbolic_convert: [WARNING] 
D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py <function Conv2d.forward at 0x000001EF7F8DD900> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:23,059] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments[2023-12-09 18:52:23,115] torch._dynamo.symbolic_convert: [WARNING] D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001EF7F8DC280> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional argumentslist index out of rangeTraceback (most recent call last): File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 200, in openvino_fx compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs) File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 426, in openvino_compile_cached_modelom.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])IndexError: list index out of rangeDuring handling of the above exception, another exception occurred:Traceback (most recent call last): File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node result =super().run_node(n) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_nodereturn getattr(self, n.op)(n.target, args, kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_modulereturn submod(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_implreturn forward_call(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forwardreturn originals.GroupNorm_forward(self, input) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forwardreturn F.group_norm( File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_normreturn handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function result = mode.__torch_function__(public_api, types, args, kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__return func(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_normreturn torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280] 0%|| 0/20 [00:01<?, ?it/s]*** Error completing request*** Arguments: ('task(av52dfc5c5rmdah)', 'tree', '', 
[], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001EF3F382B00>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {} Traceback (most recent call last): File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 200, in openvino_fx compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs) File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 426, in openvino_compile_cached_modelom.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype]) IndexError: list index out of range During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node result = super().run_node(n) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_nodereturn getattr(self, n.op)(n.target, args, kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_modulereturn submod(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_implreturn forward_call(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forwardreturn originals.GroupNorm_forward(self, input) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forwardreturn F.group_norm( File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_normreturn handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function result = mode.__torch_function__(public_api, types, args, kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__return func(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_normreturn torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280] The above exception was the direct cause of the following exception: Traceback (most recent call last): File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 670, in call_user_compiler compiled_fn = compiler_fn(gm, self.fake_example_inputs()) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\debug_utils.py", line 1055, in debug_wrapper compiled_gm = compiler_fn(gm, example_inputs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 
107, in wrapperreturn fn(model, inputs, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 233, in openvino_fxreturn compile_fx(subgraph, example_inputs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 415, in compile_fx model_ = overrides.fuse_fx(model_, example_inputs_) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 96, in fuse_fx gm = mkldnn_fuse_fx(gm, example_inputs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\mkldnn.py", line 509, in mkldnn_fuse_fx ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 185, in propagatereturnsuper().run(*args) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 136, in run self.env[node] = self.run_node(node) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 152, in run_node raise RuntimeError( RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\system\\Documents\\SD\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'} While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) Original traceback: File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward hidden_states = self.norm1(hidden_states) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "D:\system\Documents\SD\stable-diffusion-webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "D:\system\Documents\SD\stable-diffusion-webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img processed = modules.scripts.scripts_txt2img.run(p, *args) File "D:\system\Documents\SD\stable-diffusion-webui\modules\scripts.py", line 601, in run processed = script.run(p, *script_args) File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1228, in run processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac) File "D:\system\Documents\SD\stable-diffusion-webui\scripts\openvino_accelerate.py", line 979, in process_images_openvino output = shared.sd_diffusers_model( File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_contextreturn func(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 840, in __call__ noise_pred = self.unet( File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_implreturn forward_call(*args, **kwargs) File 
"D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 82, in forwardreturn self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 209, in _fnreturn fn(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 924, in forward File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1066, in<graph breakin forward> sample, res_samples = downsample_block( File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_implreturn forward_call(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1159, in forward hidden_states = resnet(hidden_states, temb, scale=lora_scale) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_implreturn forward_call(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 337, in catch_errorsreturn callback(frame, cache_size, hooks) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 404, in _convert_frame result = inner_convert(frame, cache_size, hooks) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 104, in _fnreturn fn(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 262, in _convert_frame_assertreturn _compile( File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper r = func(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 324, in _compile out_code = transform_code_object(code, transform) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 445, in transform_code_object transformations(instructions, code_options) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 311, in transformtracer.run() File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1726, in runsuper().run() File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 576, in run and self.step() File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 540, in step getattr(self, inst.opname)(inst) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 372, in wrapper self.output.compile_subgraph(self, reason=reason) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 541, in compile_subgraph self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 588, in compile_and_call_fx_graph compiled_fn = self.call_user_compiler(gm) File 
"D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper r = func(*args, **kwargs) File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 675, in call_user_compiler raise BackendCompilerFailed(self.compiler_fn, e) from e torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\system\\Documents\\SD\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'} While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) Original traceback: File "D:\system\Documents\SD\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward hidden_states = self.norm1(hidden_states) Set torch._dynamo.config.verbose=True for more information You can suppress this exception and fall back to eager by setting: torch._dynamo.config.suppress_errors = True---
Additional information
No response