[Bug]: Can't change checkpoints (but only if I use GPU) #101

Open
1 task done
AndrewRainfall opened this issue Feb 21, 2024 · 2 comments

Comments


AndrewRainfall commented Feb 21, 2024

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Recently I bought an Intel Arc A770 GPU & installed OpenVINO SD.

The problem is that it only uses one checkpoint & I can't change it. I assume it's the default 1.5 model that was pre-installed with OpenVINO SD.

I can select checkpoints from the dropdown menu in the top left, but regardless of my choice, all images I generate look like they were generated with the same checkpoint.

It only happens if I use "Accelerate with OpenVINO" & select my GPU.

If I generate images with CPU everything works as intended (but takes a lot of time).

Checkpoints were downloaded from Civitai before I installed OpenVINO SD, then I linked them with mklink.
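
For reference, I created the links roughly like this (an illustrative example with made-up source paths, not my exact ones; run from an elevated cmd prompt):

    :: hypothetical paths - link a downloaded checkpoint into the webui models folder
    mklink "C:\stable-diffusion-webui\models\Stable-diffusion\bluePencil_v10.safetensors" "D:\Downloads\Civitai\bluePencil_v10.safetensors"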

All the previous steps I took (if needed): #100

I also got the "ValueError: prompt_embeds and negative_prompt_embeds must have the same shape" error during the test, but it's an already known problem - just ignore it in the logs: #95
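
For context on that error: the diffusers pipeline checks that the positive & negative prompt embeddings share the same shape, and in my case the long negative prompt presumably gets encoded into two 77-token chunks (154 tokens) while the positive prompt fits in one. A minimal Python sketch of the kind of check that fails (shapes taken from the log below; the helper name is just illustrative):

    import torch

    def check_embeds(prompt_embeds, negative_prompt_embeds):
        # diffusers' check_inputs raises ValueError when the two embedding
        # tensors passed to the pipeline differ in shape
        if prompt_embeds.shape != negative_prompt_embeds.shape:
            raise ValueError(
                "`prompt_embeds` and `negative_prompt_embeds` must have the same shape, "
                f"got {tuple(prompt_embeds.shape)} != {tuple(negative_prompt_embeds.shape)}"
            )

    # positive prompt: one 77-token chunk; negative prompt: two chunks (154 tokens)
    check_embeds(torch.zeros(2, 77, 768), torch.zeros(2, 154, 768))  # raises ValueError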

Steps to reproduce the problem

  1. Enter a prompt, select "Accelerate with OpenVINO", select GPU;
  2. Change checkpoint & press generate;
  3. Generation will happen normally, but regardless of the checkpoint choice, the images will look like they were generated with the same checkpoint.

What should have happened?

Different checkpoints should hugely affect the image generation (as still happens if I use the CPU).

Sysinfo

sysinfo-2024-02-21-22-15.txt

What browsers do you use to access the UI?

Google Chrome

Console logs

venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 44006297e03a07f28505d54d6ba5fd55e0c1292d
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [41b6846108] from C:\stable-diffusion-webui\models\Stable-diffusion\cyberrealistic_v41BackToBasics.safetensors
Running on local URL:  http://127.0.0.1:7860
Creating model from config: C:\stable-diffusion-webui\configs\v1-inference.yaml

To create a public link, set `share=True` in `launch()`.
Startup time: 20.3s (prepare environment: 0.8s, import torch: 8.5s, import gradio: 2.2s, setup paths: 2.8s, initialize shared: 0.2s, other imports: 1.8s, setup codeformer: 0.2s, list SD models: 0.1s, load scripts: 2.6s, create ui: 0.8s, gradio launch: 0.7s).
Applying attention optimization: InvokeAI... done.
Model loaded in 3.7s (load weights from disk: 1.0s, create model: 0.7s, apply weights to model: 1.2s, apply float(): 0.5s, calculate empty prompt: 0.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [13:55<00:00, 33.44s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 25/25 [13:47<00:00, 33.10s/it]
{}tal progress: 100%|██████████████████████████████████████████████████████████████████| 25/25 [13:47<00:00, 33.77s/it]
Loading weights [41b6846108] from C:\stable-diffusion-webui\models\Stable-diffusion\cyberrealistic_v41BackToBasics.safetensors
OpenVINO Script:  created model from config : C:\stable-diffusion-webui\configs\v1-inference.yaml
*** Error completing request
*** Arguments: ('task(d2lnrg8axmjjn1g)', 'a candid photo of Donald Trump riding on a white horse in the golf field, upper body, eye level, light smile, male suit with red tie, daylight, soft light', '(worst quality, low quality:1.4), bad composition, low contrast, underexposed, overexposed, beginner, amateur, bad anatomy, inaccurate eyes, ugly, extra limbs, disfigured, deformed, distorted face, watermark, signature, text, jpeg artifacts, scan artifacts, tiling, out of frame, body out of frame, cut off, nsfw, female, looking at viewer, sad, full body', [], 25, 'DPM++ 2M Karras', 1, 2, 5, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001C55E3D8D90>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'GPU', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "C:\stable-diffusion-webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "C:\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1228, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "C:\stable-diffusion-webui\scripts\openvino_accelerate.py", line 979, in process_images_openvino
        output = shared.sd_diffusers_model(
      File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 754, in __call__
        self.check_inputs(
      File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 530, in check_inputs
        raise ValueError(
    ValueError: `prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but got: `prompt_embeds` torch.Size([2, 77, 768]) != `negative_prompt_embeds` torch.Size([2, 154, 768]).

---
{}
Loading weights [41b6846108] from C:\stable-diffusion-webui\models\Stable-diffusion\cyberrealistic_v41BackToBasics.safetensors
OpenVINO Script:  created model from config : C:\stable-diffusion-webui\configs\v1-inference.yaml
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [01:37<00:00,  3.92s/it]
Reusing loaded model cyberrealistic_v41BackToBasics.safetensors [41b6846108] to load bluePencil_v10.safetensors
Calculating sha256 for C:\stable-diffusion-webui\models\Stable-diffusion\bluePencil_v10.safetensors: 3a105af1a6521509c6ffe8ee9bc953d58bb78b2f96d0bd6fe36ab636cd478c2a
Loading weights [3a105af1a6] from C:\stable-diffusion-webui\models\Stable-diffusion\bluePencil_v10.safetensors
Applying attention optimization: InvokeAI... done.
Weights loaded in 6.3s (calculate hash: 5.5s, load weights from disk: 0.3s, apply weights to model: 0.4s).
{}
Loading weights [3a105af1a6] from C:\stable-diffusion-webui\models\Stable-diffusion\bluePencil_v10.safetensors
OpenVINO Script:  created model from config : C:\stable-diffusion-webui\configs\v1-inference.yaml
OpenVINO Script:  loading vae from : C:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:41<00:00,  1.67s/it]
Reusing loaded model bluePencil_v10.safetensors [3a105af1a6] to load ghostmix_v12.safetensors [d7465e52e1]
Loading weights [d7465e52e1] from C:\stable-diffusion-webui\models\Stable-diffusion\ghostmix_v12.safetensors
Applying attention optimization: InvokeAI... done.
Weights loaded in 6.8s (load weights from disk: 0.4s, apply weights to model: 6.3s).
{}
Loading weights [d7465e52e1] from C:\stable-diffusion-webui\models\Stable-diffusion\ghostmix_v12.safetensors
OpenVINO Script:  created model from config : C:\stable-diffusion-webui\configs\v1-inference.yaml
OpenVINO Script:  loading vae from : C:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:31<00:00,  1.26s/it]

Additional information

Am I missing something? How can I fix this?


Truebob commented Feb 22, 2024

Yeah, I don't know how to solve your problem. Usually I'm okay with switching models, but sometimes when I run them it breaks, especially when my memory is low. Restarting it and doing it again usually fixes it for me. Most of the time, switching models works normally.


AndrewRainfall commented Feb 22, 2024

I found a workaround:

If I generate at least 1 image with CPU, all subsequent generations with GPU will work as intended (even if I change checkpoints) until SD restart.

But this adds an extra 15 minutes to the workflow, which is a lot.
