
Getting CUDA issues when using linear interpolation a few times in a row #35

Closed
gsgoldma opened this issue Jan 30, 2023 · 5 comments
Labels
bug Something isn't working

Comments

@gsgoldma

gsgoldma commented Jan 30, 2023

Not sure if there's a VRAM leak involved, but even after that error I can still run normal generations without Prompt Fusion syntax.

@ljleb
Owner

ljleb commented Jan 30, 2023

Can you share the prompt and settings you were using when the error occurred? It may or may not be related to how we handle multiple interpolations in a single prompt.

I'd appreciate it if you could share the error message you are getting as well. I assume it is an out-of-memory error?

I need to know how many interpolations are in the prompt and how they are arranged together to see whether this is related to #20.
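By "arranged together", I mean for example whether the interpolations sit side by side or are nested inside one another. Side by side, with placeholder prompts in the same bracket syntax, that would look something like:

    [castle :ruins:overgrown garden :, 0, 10] photorealistic [sunrise :sunset:night sky :, 5, 15]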

@gsgoldma
Author

gsgoldma commented Jan 30, 2023
Funny thing is, sometimes if I wait a few minutes and try a regular prompt, it will work again, and then I can go back to fusion prompting. It's a little unpredictable when it's going to happen and when it recovers; it's been happening all morning. If I turn on a LoRA and then disable it, it sometimes works again too. Very strange.

[foggy night :desolate town:sherbert ice cream :, 1, 13] photorealistic, delicious
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3945741849, Size: 512x512, Model hash: 47236be899, Model: New_realgrapenew, ENSD: 31337, Score: 5.31
Template: [foggy night :desolate town:sherbert ice cream :, 1, 13] photorealistic, delicious

35%|█████████████████████████████ | 7/20 [00:07<00:14, 1.13s/it]
Error completing request
Arguments: ('task(a6g6nxppcupf62d)', '[foggy night :desolate town:sherbert ice cream :, 1, 13] photorealistic, delicious', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, 0, 0, 0, 0, 0.25, False, True, False, 0, -1, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'x264', 'mci', 10, 0, False, True, True, True, 'intermediate', 'animation', True, False,, 0.65, 0.65, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'Refresh models', None, '', 'Get Tags', '', '', 0.9, 5, '0.0001', False, 'None', '', 0.1, False, 0, True, -1.0, False, '', True, False, 0, 384, 0, True, True, True, 1, '\n

Combinations

\n Choose a number of terms from a list, in this case we choose two artists\n {2$$artist1|artist2|artist3}\n If $$ is not provided, then 1$$ is assumed.\n

\n\n

Wildcards

\n

Available wildcards

\n
    \n
\n
\n WILDCARD_DIR: scripts/wildcards
\n You can add more wildcards by creating a text file with one term per line and name is mywildcards.txt. Place it in scripts/wildcards. mywildcards will then become available.\n ', None, '', 'outputs', 1, '', 0, '', True, False, False, False, False, False, 1, 1, False, False, '', 1, True, 100, '', '', 8, True, 16, 'Median cut', 8, True, True, 16, 'PNN', True, False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, '', 25, True, 5.0, False, False, '', '', '', 'Positive', 0, ', ', True, 32, 0, 'Median cut', 'luminance', False, 'svg', True, True, False, 0.5, 1, '', 0, '', 0, '', True, False, False, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', '', 'None', 30, 4, 0, 0, False, 'None', '
', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, None, True, '', 5, 24, 12.5, 1000, 'DDIM', 0, 64, 64, '', 64, 7.5, 0.42, 'DDIM', 64, 64, 1, 0, 92, True, True, True, False, False, False, 'midas_v21_small', False, True, False, True, True, [], False, '', True, False, 'D:\stable-diffusion-webui\extensions\sd-webui-riffusion\outputs', 'Refresh Inline Audio (Last Batch)', None, None, None, None, None, None, None, None, False, 4.0, '', 10.0, False, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 10.0, True, 30.0, True, '', False, False, False, False, 'Auto', 0.5, 1, 0, 0, 512, 512, False, False, True, True, True, False, True, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, False, '{inspiration}', None) {}
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "D:\stable-diffusion-webui\modules\processing.py", line 484, in process_images
res = process_images_inner(p)
File "D:\stable-diffusion-webui\modules\processing.py", line 626, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "D:\stable-diffusion-webui\modules\processing.py", line 826, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 290, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 190, in launch_sampling
return func()
File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 290, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 94, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 781, in forward
h = module(h, emb, context)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward
x = block(x, context=context[i])
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\extensions\Hypernetwork-MonkeyPatch-Extension\patches\external_pr\sd_hijack_checkpoint.py", line 5, in BasicTransformerBlock_forward
return checkpoint(self._forward, x, context)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 249, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 107, in forward
outputs = run_function(*args)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 262, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 129, in split_cross_attention_forward
s2 = s1.softmax(dim=-1, dtype=q.dtype)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 6.00 GiB total capacity; 4.75 GiB already allocated; 0 bytes free; 5.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

ERROR: Exception in ASGI application
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
message = await recv_stream.receive()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in call
return await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in call
await super().call(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in call
await self.middleware_stack(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in call
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in call
await self.app(scope, receive, _send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in call
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 391, in app_encryption_middleware
res: StreamingResponse = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in call
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
res: Response = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in call
await responder(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in call
await self.app(scope, receive, self.send_with_gzip)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in call
await self.simple_response(scope, receive, send, request_headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in call
await route.handle(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\modules\progress.py", line 85, in progressapi
shared.state.set_current_image()
File "D:\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
self.do_set_current_image()
File "D:\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
File "D:\stable-diffusion-webui\modules\sd_samplers_common.py", line 51, in samples_to_image_grid
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
File "D:\stable-diffusion-webui\modules\sd_samplers_common.py", line 51, in
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
File "D:\stable-diffusion-webui\modules\sd_samplers_common.py", line 38, in single_sample_to_image
x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
File "D:\stable-diffusion-webui\modules\processing.py", line 422, in decode_first_stage
x = model.decode_first_stage(x)
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
return self.first_stage_model.decode(z)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
dec = self.decoder(z)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 641, in forward
h = self.up[i_level].upsample(h)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 64, in forward
x = self.conv(x)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 6.00 GiB total capacity; 4.75 GiB already allocated; 0 bytes free; 5.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

@ljleb ljleb added the bug label Jan 30, 2023
@ljleb
Owner

ljleb commented Jan 30, 2023

Thanks for the details. I'll try to dive into this when I get some free time again during the week.
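In the meantime, two observations from the trace: the second traceback is just the live-preview endpoint (modules/progress.py) failing to decode the current latent once the GPU was already full, so both errors share the same root out-of-memory condition. Also, the max_split_size_mb hint in the message is a standard PyTorch caching-allocator option, not something from this extension; if fragmentation is the problem, it may be worth a try. A minimal sketch of how it is applied (512 is an arbitrary example value, and the variable has to be set before CUDA is initialized, which for the webui usually means webui-user.bat rather than Python):

    import os

    # Must be set before the process makes its first CUDA call;
    # 512 MiB is an arbitrary example split size, not a recommendation.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

    import torch  # imported after setting the variable on purpose
    torch.zeros(1, device="cuda")  # first CUDA use picks up the setting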

@gsgoldma
Author

gsgoldma commented Jan 30, 2023

Thanks for the details. I'll try to dive into this when I get some free time again during the week.

And it's not even that big of a deal; in fact, without restarting the program, it's still working fine after that error. I had to wait several minutes and switch prompts / toggle LoRAs off and on. Not sure which step fixed it, or whether it was just waiting for the VRAM to be freed that made it work again.

@ljleb
Owner

ljleb commented Jan 31, 2023

As the issue seems to have resolved itself on your side, I'll close this for now. I'll reopen and continue investigating if someone else runs into a similar situation.

I don't really understand what could be causing this, except maybe a weird interaction between different extensions (including this one). I couldn't reproduce it on my side so far.
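If anyone hits this again, a quick check from inside the webui's Python process would help tell live allocations apart from cached-but-unused blocks (a minimal sketch using only standard PyTorch calls, nothing extension-specific):

    import torch

    mib = 2 ** 20
    # Memory occupied by live tensors:
    print(f"allocated: {torch.cuda.memory_allocated() / mib:.0f} MiB")
    # Memory the caching allocator has reserved from the driver (>= allocated):
    print(f"reserved:  {torch.cuda.memory_reserved() / mib:.0f} MiB")

    # Returns unused cached blocks to the driver without touching live tensors.
    # If generations recover right after this, the "leak" was only cache.
    torch.cuda.empty_cache()

That would narrow down whether waiting or toggling a LoRA is just giving the allocator time to release cached memory.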

I'm leaving a note for my future self here in case I need to investigate further: I think the next step would be to install all of gsgoldma's extensions on AUTOMATIC1111/stable-diffusion-webui@2c1bb46 on my side to have any chance of reproducing (a rough setup sketch follows the extension list below). Ideally, we should reduce the number of extensions to a minimum to make the issue easier to understand and fix.

extension list:
ABG_extension/
Config-Presets/
DreamArtist-sd-webui-extension/
Hypernetwork-MonkeyPatch-Extension/
PromptGallery-stable-diffusion-webui/
SD-latent-mirroring/
StylePile/
a1111-sd-webui-haku-img/
a1111-sd-webui-tagcomplete/
animator_extension/
asymmetric-tiling-sd-webui/
auto-sd-paint-ext/
batch-face-swap/
booru2prompt/
custom-diffusion-webui/
ddetailer/
deforum-for-automatic1111-webui/
depth-image-io-for-SDWebui/
depthmap2mask/
embedding-inspector/
enhanced-img2img/
extensions.py
model-keyword/
multi-subject-render/
novelai-2-local-prompt/
openOutpaint-webUI-extension/
prompt-fusion-extension/
prompt_gallery_name.json
'put extensions here.txt'
sd-dynamic-prompts/
sd-extension-aesthetic-scorer/
sd-extension-steps-animation/
sd-extension-system-info/
sd-infinity-grid-generator-script/
sd-web-ui-quickcss/
sd-webui-additional-networks/
sd-webui-gelbooru-prompt/
sd-webui-model-converter/
sd-webui-multiple-hypernetworks/
sd-webui-riffusion/
sd_dreambooth_extension/
sd_grid_add_image_number/
sd_web_ui_preset_utils/
sdweb-merge-block-weighted-gui/
sdweb-merge-board/
seed_travel/
shift-attention/
stable-diffusion-webui-Prompt_Generator/
stable-diffusion-webui-aesthetic-gradients/
stable-diffusion-webui-artists-to-study/
stable-diffusion-webui-auto-tls-https/
stable-diffusion-webui-cafe-aesthetic/
stable-diffusion-webui-conditioning-highres-fix/
stable-diffusion-webui-daam/
stable-diffusion-webui-dataset-tag-editor/
stable-diffusion-webui-depthmap-script/
stable-diffusion-webui-embedding-editor/
stable-diffusion-webui-images-browser/
stable-diffusion-webui-inspiration/
stable-diffusion-webui-instruct-pix2pix/
stable-diffusion-webui-pixelization/
stable-diffusion-webui-promptgen/
stable-diffusion-webui-tokenizer/
stable-diffusion-webui-visualize-cross-attention-extension/
stable-diffusion-webui-wd14-tagger/
stable-diffusion-webui-wildcards/
ultimate-upscale-for-automatic1111/
unprompted/
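A rough sketch of that reproduction setup, assuming git is on PATH (each entry in the extension list above would have to be resolved to its repository URL by hand; the single URL below is just this extension as an example):

    import subprocess

    WEBUI = "https://github.com/AUTOMATIC1111/stable-diffusion-webui"
    COMMIT = "2c1bb46"  # the webui commit gsgoldma is running, per the note above

    # Placeholder list: fill in one URL per extension from the list above.
    EXTENSIONS = ["https://github.com/ljleb/prompt-fusion-extension"]

    subprocess.run(["git", "clone", WEBUI], check=True)
    subprocess.run(["git", "-C", "stable-diffusion-webui", "checkout", COMMIT], check=True)
    for url in EXTENSIONS:
        # The webui discovers extensions by folder under extensions/, so a plain clone suffices.
        subprocess.run(["git", "-C", "stable-diffusion-webui/extensions", "clone", url], check=True)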

@ljleb ljleb closed this as completed Jan 31, 2023
@ljleb ljleb closed this as not planned Jan 31, 2023