
CUDA out of memory, any tips? #41

Open
ghost opened this issue Feb 21, 2019 · 3 comments

Comments

@ghost

ghost commented Feb 21, 2019

I've gotten image recognition to work at multiple frames/second, using a GTX 1060 with 6GB of memory. Now I'm trying to train a custom classifier but I keep running out of memory.
With the darknet implementation, I can train using the yolov3-tiny.cfg file but not the yolov3.cfg file, which is probably expected given my hardware limitations. Now I'm trying to train with this implementation.

What parameters could I tweak in training/params.py to reduce my memory consumption?
Is there an equivalent param in this implementation for subdivisions in the darknet implementation?
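
For context, darknet's subdivisions splits each batch into smaller chunks whose gradients are accumulated before the weight update. In PyTorch terms that is just gradient accumulation; here is a minimal sketch of what I mean, where `model`, `optimizer`, `compute_loss` and `loader` are placeholders and not names from this repo:

```python
# Hypothetical sketch of gradient accumulation, the PyTorch analogue of
# darknet's "subdivisions". None of these names come from this repository.
accumulate = 4  # effective batch = loader batch_size * accumulate

optimizer.zero_grad()
for i, (images, targets) in enumerate(loader):
    loss = compute_loss(model(images), targets)
    (loss / accumulate).backward()   # scale so the summed gradient matches one large batch
    if (i + 1) % accumulate == 0:
        optimizer.step()             # update weights only every `accumulate` mini-batches
        optimizer.zero_grad()
```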

@guagen

guagen commented Mar 6, 2019

I think you should turn down batch_size.
You can set batch_size to 1 at first and increase it slowly. From experience, batch_size = 6 may be the best for a GTX 1060 when img_h and img_w equal 416.
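
In training/params.py that means editing the batch size entry; roughly like the following, assuming the file keeps its hyperparameters in the usual dict (the exact keys may differ in your copy):

```python
# Illustrative only -- check the actual keys in training/params.py.
# The point is simply to lower batch_size (and optionally the input size).
TRAINING_PARAMS = {
    "img_h": 416,
    "img_w": 416,
    "batch_size": 6,   # start at 1 and raise it until you hit OOM; ~6 fits a 6 GB GTX 1060
    # ... keep the remaining keys unchanged ...
}
```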

@AndrewZhuZJU

In my experience, batch size should be set to 16 if your GPU has 12 GB of memory (GTX 1080 Ti).

@leonardozcm

Remove the parallels config in params.py and the related code in main.py.
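
If params.py lists GPU ids under a parallels key and main.py wraps the model in torch.nn.DataParallel, the single-GPU version would look roughly like this (I'm guessing at the surrounding code, so treat it as a sketch rather than the repo's exact lines):

```python
# Sketch only: swap a DataParallel wrapper for plain single-GPU usage.
# The real lines in main.py will differ; this just shows the idea.
import torch

device = torch.device("cuda:0")

# before (multi-GPU, assumes params["parallels"] holds the GPU ids):
# model = torch.nn.DataParallel(model, device_ids=params["parallels"]).to(device)

# after (single GPU, less overhead on the primary card):
model = model.to(device)
```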
