
[Bug]: getting a lot of memory errors lately after just a few generations #7411

Open
gsgoldma opened this issue Jan 31, 2023 · 10 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@gsgoldma

gsgoldma commented Jan 31, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Since SD came out until yesterday I was able to generate an almost unlimited number of images in a single session, but after the last couple of updates it can give me an out-of-memory error after as few as 4 or 5 generations. Sometimes, if I wait a few minutes, it will be good to go again, but other times I have to restart the program.

It could be an updated extension or the main program; I'm not sure.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

It should not run out of memory.

Commit where the problem happens

2c1bb46

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

No response

Command Line Arguments

--api --cors-allow-origins http://localhost:5173 --deepdanbooru --administrator --no-half-vae --no-half --disable-safe-unpickle --xformers

List of extensions

ABG_extension/
Config-Presets/
DreamArtist-sd-webui-extension/
Hypernetwork-MonkeyPatch-Extension/
PromptGallery-stable-diffusion-webui/
SD-latent-mirroring/
StylePile/
a1111-sd-webui-haku-img/
a1111-sd-webui-tagcomplete/
animator_extension/
asymmetric-tiling-sd-webui/
auto-sd-paint-ext/
batch-face-swap/
booru2prompt/
custom-diffusion-webui/
ddetailer/
deforum-for-automatic1111-webui/
depth-image-io-for-SDWebui/
depthmap2mask/
embedding-inspector/
enhanced-img2img/
extensions.py
model-keyword/
multi-subject-render/
novelai-2-local-prompt/
openOutpaint-webUI-extension/
prompt-fusion-extension/
prompt_gallery_name.json
'put extensions here.txt'
sd-dynamic-prompts/
sd-extension-aesthetic-scorer/
sd-extension-steps-animation/
sd-extension-system-info/
sd-infinity-grid-generator-script/
sd-web-ui-quickcss/
sd-webui-additional-networks/
sd-webui-gelbooru-prompt/
sd-webui-model-converter/
sd-webui-multiple-hypernetworks/
sd-webui-riffusion/
sd_dreambooth_extension/
sd_grid_add_image_number/
sd_web_ui_preset_utils/
sdweb-merge-block-weighted-gui/
sdweb-merge-board/
seed_travel/
shift-attention/
stable-diffusion-webui-Prompt_Generator/
stable-diffusion-webui-aesthetic-gradients/
stable-diffusion-webui-artists-to-study/
stable-diffusion-webui-auto-tls-https/
stable-diffusion-webui-cafe-aesthetic/
stable-diffusion-webui-conditioning-highres-fix/
stable-diffusion-webui-daam/
stable-diffusion-webui-dataset-tag-editor/
stable-diffusion-webui-depthmap-script/
stable-diffusion-webui-embedding-editor/
stable-diffusion-webui-images-browser/
stable-diffusion-webui-inspiration/
stable-diffusion-webui-instruct-pix2pix/
stable-diffusion-webui-pixelization/
stable-diffusion-webui-promptgen/
stable-diffusion-webui-tokenizer/
stable-diffusion-webui-visualize-cross-attention-extension/
stable-diffusion-webui-wd14-tagger/
stable-diffusion-webui-wildcards/
ultimate-upscale-for-automatic1111/
unprompted/

Console logs

100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:14<00:00,  1.39it/s]
Error completing request
Arguments: ('task(ei4jtntmekaygim)', 'chemistry     Volumetric flask,\n    Erlenmeyer flasks,\npilling colorful bubbling chemicals\nboiling steam, calcinator, alembic', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, 0, 0, 0, 0, 0.25, False, True, False, 0, -1, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'x264', 'mci', 10, 0, False, True, True, True, 'intermediate', 'animation', False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'Refresh models', None, '', 'Get Tags', '', '', 0.9, 5, '0.0001', False, 'None', '', 0.1, False, 0, True, -1.0, False, '', True, False, 0, 384, 0, True, True, True, 1, '\n            <h3><strong>Combinations</strong></h3>\n            Choose a number of terms from a list, in this case we choose two artists\n            <code>{2$$artist1|artist2|artist3}</code>\n            If $$ is not provided, then 1$$ is assumed.\n            <br/><br/>\n\n            <h3><strong>Wildcards</strong></h3>\n            <p>Available wildcards</p>\n            <ul>\n        </ul>\n            <br/>\n            <code>WILDCARD_DIR: scripts/wildcards</code><br/>\n            <small>You can add more wildcards by creating a text file with one term per line and name is mywildcards.txt. Place it in scripts/wildcards. 
<code>__mywildcards__</code> will then become available.</small>\n        ', None, '', 'outputs', 1, '', 0, '', True, False, False, False, False, False, 1, 1, False, False, '', 1, True, 100, '', '', 8, True, 16, 'Median cut', 8, True, True, 16, 'PNN', True, False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, '', 25, True, 5.0, False, False, '', '', '', 'Positive', 0, ', ', True, 32, 0, 'Median cut', 'luminance', False, 'svg', True, True, False, 0.5, 1, '', 0, '', 0, '', True, False, False, False, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', '', 'None', 30, 4, 0, 0, False, 'None', '<br>', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, None, True, '', 5, 24, 12.5, 1000, 'DDIM', 0, 64, 64, '', 64, 7.5, 0.42, 'DDIM', 64, 64, 1, 0, 92, True, True, True, False, False, False, 'midas_v21_small', False, True, False, True, True, [], False, '', True, False, 'D:\\stable-diffusion-webui\\extensions\\sd-webui-riffusion\\outputs', 'Refresh Inline Audio (Last Batch)', None, None, None, None, None, None, None, None, False, 4.0, '', 10.0, False, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 10.0, True, 30.0, True, '', False, False, False, False, 'Auto', 0.5, 1, 0, 0, 512, 512, False, False, True, True, True, False, True, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, False, '{inspiration}', None) {}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 484, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 628, in process_images_inner
    x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to(dtype=devices.dtype_vae))[0].cpu() for i in range(samples_ddim.size(0))]
  File "D:\stable-diffusion-webui\modules\processing.py", line 628, in <listcomp>
    x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to(dtype=devices.dtype_vae))[0].cpu() for i in range(samples_ddim.size(0))]
  File "D:\stable-diffusion-webui\modules\processing.py", line 422, in decode_first_stage
    x = model.decode_first_stage(x)
  File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 637, in forward
    h = self.up[i_level].block[i_block](h, temb)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 132, in forward
    h = nonlinearity(h)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2059, in silu
    return torch._C._nn.silu(input)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 6.00 GiB total capacity; 4.88 GiB already allocated; 0 bytes free; 5.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
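The numbers in that error message can be read directly: PyTorch has reserved noticeably more VRAM than is allocated to live tensors, which is the fragmentation pattern the final hint is about. A minimal self-contained sketch (the `max_split_size_mb:512` value is purely illustrative, not a setting recommended anywhere in this thread):

```python
import os

# The allocator hint from the traceback must be set before torch is
# imported; shown here via os.environ, though setting it in the shell
# works too. The value 512 is an illustrative assumption.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# Memory figures reported in the traceback above, in GiB.
total_gib = 6.00       # GPU 0 total capacity
allocated_gib = 4.88   # held by live tensors
reserved_gib = 5.24    # held by PyTorch's caching allocator

# reserved >> allocated is what the error message warns about:
# roughly 0.36 GiB is cached by the allocator but split into blocks
# too small to satisfy the 256 MiB request that failed.
cached_gib = reserved_gib - allocated_gib
print(f"cached but unallocated: {cached_gib:.2f} GiB")  # -> 0.36 GiB
```

This also matches the "if I wait a few minutes it works again" symptom: cached blocks are eventually released, after which a fresh 256 MiB allocation can succeed.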

Additional information

If I wait a few minutes it sometimes works again, as if it takes a while to give back the VRAM.


@gsgoldma gsgoldma added the bug-report Report of a bug, yet to be confirmed label Jan 31, 2023
@genewitch

I'm getting the same after updating from an SD-1.4-era webui to the latest. I used to be able to do batches of 4 images at up to 768x512 on any model; now I sometimes get CUDA errors when generating a single 512x512 image.

There used to be a settings option to "cache in VRAM" the models, among other things that could be kept in VRAM. I couldn't find it (I checked twice, but I still may have missed it).

I am trying to find the command-line args/flags so I can try the reduced-memory options. Where are they?

@genewitch

Reinstalling xformers helped, but there was an issue with the scriptlet that installs it: the pin read like xformers==xformers==0.0.16r(some numbers).

I changed it to xformers==0.0.16r(some numbers) and it installed. I also had to turn off the float32 upscale setting under Settings -> Stable Diffusion.

Sorry, my window scrolled too far and I got booted off the VPN, so I can't copy and paste the exact details. Hopefully someone else knows what I am talking about.

@ClashSAN
Collaborator

ClashSAN commented Jan 31, 2023

@gsgoldma Some extensions load their models and keep them in VRAM until the program is closed. Disable those by unticking their boxes when you don't need them. If you do a fresh install, you will not have this issue.

There used to be a settings option to "cache in VRAM" the models, among other things that could be saved in VRAM

@genewitch I think you're after something else; that option, logically, would put more of the model into VRAM, not less.

When you run with --lowvram --xformers or --medvram --xformers, your total pixel capacity increases, whether you are doing parallel processing (batch size) or just one very large image.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Troubleshooting
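As a concrete sketch of the advice above, on Windows these flags go in `webui-user.bat`. This is a hedged, hypothetical minimal file; the flag combination shown is illustrative and is not the issue author's actual argument list:

```shell
:: webui-user.bat -- hedged example, not this user's actual file.
:: --medvram moves parts of the model between VRAM and RAM to reduce
:: peak usage; --lowvram is the more aggressive variant for small GPUs.
set COMMANDLINE_ARGS=--medvram --xformers

call webui.bat
```

Swapping `--medvram` for `--lowvram` trades more speed for an even lower VRAM footprint.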

@gsgoldma
Author

I think resetting my venv and updating my graphics card driver might have helped; maybe just the venv wipe did. We'll see if the error comes back, but I'll leave the issue open for the other user who still has it.

Maybe try the venv wipe and reinstall and report back; I'll close the issue if it solves your problem too.

@gsgoldma gsgoldma reopened this Jan 31, 2023
@ljleb
Contributor

ljleb commented Jan 31, 2023

@gsgoldma I see you are using Prompt Fusion. Seeing this, I'm even more certain that it's an issue caused by the extension; see ljleb/prompt-fusion-extension#35, you may be in a similar situation. Does it stop throwing when you stop using the interpolation syntax?

If the interpolation syntax doesn't leak memory or something similar, then maybe the issue reported on our side isn't related to the extension after all?

I'm not sure I understand why this would happen if it were caused by the extension. Not a single line in the error message contains a file from Fusion or anything related to it, so I'm really confused by this one.

Edit: I just realized you are literally the same person as in that issue, lol. Can you clarify whether you figured out that it wasn't related to the extension after all?

@gsgoldma
Author

gsgoldma commented Jan 31, 2023

@ljleb It did not throw the error as quickly when I didn't use it, but it always would eventually. I deleted my old venv and installed an updated NVIDIA game driver, but I'm not sure which action, if not both, helped. My generation speeds had become abysmally slow, about 1 minute per image even without --medvram; now they're at 10-16 seconds, so something was fixed by what I did. There may still be a problem with Prompt Fusion, but it may now take longer usage for that to become apparent.

@HalBenHB

I'm getting this error whenever I use the upscaler. I can generate images from txt2img, but if I select Hires. fix and change any dimension by even 1 pixel, I get this error.

I also can't upscale from img2img.

(It's not just upscaling: if I change a 512x512 image to 511x511, I again get this error.)

@notdelicate

I'm having this issue as well. I have installed it twice just to be sure it's not an install error. I can only generate 3-4 images before my desktop crashes and I'm forced to restart the session. I'm only generating 512x512 images; in the past I could generate hundreds of those without running into out-of-memory problems.

@RykeWollf

RykeWollf commented Mar 6, 2023

Started having these errors today when using Hires. fix. Running on 8 GB of VRAM, I never had these issues before. I'm unsure whether it's because I installed some extensions, but that doesn't seem to make sense, as I was just generating standard images without any additional strain.

[Solved] On my part it was using --api; for some reason it stopped me from being able to generate with Hires. fix.

@2blackbar

This fixes it: v1 split attention.
#8394
