Getting CUDA issues when using linear interpolation a few times in a row #35
Comments
Can you share the prompt and settings you were using when the error occurred? It may or may not be related to how we handle multiple interpolations in a single prompt. I'd appreciate it if you could share the error message you are getting as well; I assume it is an out-of-memory error. I need to know how many interpolations are in the prompt and how they are arranged together to see whether this is related to #20.
Funny thing is, sometimes if I wait a few minutes and try a regular prompt, it will work again, and then I can go back to fusion prompting. It's a bit unpredictable when it's going to happen and when it recovers; it's been happening all morning. If I turn on a LoRA and then disable it, it sometimes works again too. Very strange.

```
[foggy night :desolate town:sherbert ice cream :, 1, 13] photorealistic, delicious
35%|█████████████████████████████ | 7/20 [00:07<00:14, 1.13s/it]
Combinations
Choose a number of terms from a list, in this case we choose two artists
{2$$artist1|artist2|artist3}
If $$ is not provided, then 1$$ is assumed.

Wildcards
Available wildcards

WILDCARD_DIR: scripts/wildcards
You can add more wildcards by creating a text file with one term per line and name it mywildcards.txt. Place it in scripts/wildcards. mywildcards will then become available.
', None, '', 'outputs', 1, '', 0, '', True, False, False, False, False, False, 1, 1, False, False, '', 1, True, 100, '', '', 8, True, 16, 'Median cut', 8, True, True, 16, 'PNN', True, False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, '', 25, True, 5.0, False, False, '', '', '', 'Positive', 0, ', ', True, 32, 0, 'Median cut', 'luminance', False, 'svg', True, True, False, 0.5, 1, '', 0, '', 0, '', True, False, False, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', '', 'None', 30, 4, 0, 0, False, 'None', '', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, None, True, '', 5, 24, 12.5, 1000, 'DDIM', 0, 64, 64, '', 64, 7.5, 0.42, 'DDIM', 64, 64, 1, 0, 92, True, True, True, False, False, False, 'midas_v21_small', False, True, False, True, True, [], False, '', True, False, 'D:\stable-diffusion-webui\extensions\sd-webui-riffusion\outputs', 'Refresh Inline Audio (Last Batch)', None, None, None, None, None, None, None, None, False, 4.0, '', 10.0, False, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 10.0, True, 30.0, True, '', False, False, False, False, 'Auto', 0.5, 1, 0, 0, 512, 512, False, False, True, True, True, False, True, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, False, '{inspiration}', None) {}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 484, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 626, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\stable-diffusion-webui\modules\processing.py", line 826, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 290, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 190, in launch_sampling
    return func()
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 290, in
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 94, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 781, in forward
    h = module(h, emb, context)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward
    x = block(x, context=context[i])
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\extensions\Hypernetwork-MonkeyPatch-Extension\patches\external_pr\sd_hijack_checkpoint.py", line 5, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 262, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 129, in split_cross_attention_forward
    s2 = s1.softmax(dim=-1, dtype=q.dtype)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 6.00 GiB total capacity; 4.75 GiB already allocated; 0 bytes free; 5.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR: Exception in ASGI application
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
```
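The hint at the end of the error message (`max_split_size_mb`) is applied through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which has to be set before `torch` initializes CUDA. A minimal sketch, assuming you set it in the launcher script (the 512 MiB value is an illustrative guess, not something tested in this thread):

```python
import os

# Must be set before torch initializes the CUDA allocator, e.g. in
# webui-user.bat (set PYTORCH_CUDA_ALLOC_CONF=...) or at the very top
# of the launcher script, before `import torch`.
# 512 is an illustrative cap; smaller values fight fragmentation harder
# at some throughput cost.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

On Windows the equivalent is a `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` line in `webui-user.bat` before the webui is launched.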
Thanks for the details. I'll try to dive into this when I get some free time again during the week.
And it's not even that big of a deal; in fact, without restarting the program, it's still working fine after that error. I had to wait several minutes and switch prompts and toggle LoRAs off and on. I'm not sure which step fixed it, or whether just waiting for the VRAM to be freed made it work again.
As the issue has been solved on your side, I'll close this for now. I'll reopen and continue to investigate if someone else runs into a similar situation. I don't really understand what could be causing this, except maybe a weird interaction between different extensions (including this one); I couldn't reproduce it on my side so far. I'm leaving a note to my future self here in case I need to investigate further: I think the next step would be to install all of gsgoldma's extensions on AUTOMATIC1111/stable-diffusion-webui@2c1bb46 on my side to have any chance of reproducing. Ideally we should reduce the number of extensions to a minimum to make it easier to understand and fix the issue. Extension list:
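Narrowing down to a minimal extension set is usually done by bisection: move half of the extension folders out, relaunch, and check whether the OOM still reproduces. A sketch with placeholder directory names (`ext-a`, `ext-b` are stand-ins; the real folder names would come from the extension list, and on Windows the root would be `D:\stable-diffusion-webui`):

```shell
WEBUI=./stable-diffusion-webui                                  # placeholder path
mkdir -p "$WEBUI/extensions/ext-a" "$WEBUI/extensions/ext-b"    # stand-ins for installed extensions
mkdir -p "$WEBUI/extensions-disabled"

# Disable half of the extensions, then relaunch the webui and retest.
# Repeat, halving the suspect set each round, until one extension remains.
mv "$WEBUI/extensions/ext-a" "$WEBUI/extensions-disabled/"
ls "$WEBUI/extensions"
```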
Not sure if there's a VRAM leak involved, but I can still do normal generations without prompt fusion syntax after that error.
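For what it's worth, the numbers in the OOM message above point at fragmentation rather than a plain leak: PyTorch's allocator is holding noticeably more memory than live tensors actually use, yet the 256 MiB request still fails, presumably because no single cached block is large enough. A quick check of the arithmetic from the message:

```python
# Figures copied from the OOM message, converted to MiB.
total_capacity = 6.00 * 1024   # GPU 0 total capacity
allocated      = 4.75 * 1024   # held by live tensors
reserved       = 5.20 * 1024   # reserved by PyTorch's caching allocator
request        = 256.00        # the allocation that failed

cached_but_unused = reserved - allocated   # memory PyTorch holds but isn't using
print(round(cached_but_unused))            # prints 461
# More than 256 MiB sits in the cache in total, but evidently not as one
# contiguous block, which is exactly the case the max_split_size_mb hint targets.
print(cached_but_unused > request)         # prints True
```

This is consistent with the "reserved memory is >> allocated memory" branch of the message, and with generations recovering once the cache is released (e.g. after a model/LoRA reload).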