Sorry, we haven't tried multiple GPUs before, but the performance should not degrade when training with multiple GPUs. You may need to tune some parameters or monitor the loss during training.
Thanks for your reply!
I have trained the hourglass model (HG) with multiple GPUs. When I test the checkpoints and increase the batch size from 1 to 4, the AP and AR drop greatly. Could the accuracy decrease be caused by the BatchNormalization layers during multi-GPU training?
BTW, I use 4 GPU cards for training.
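For what it's worth, one common source of a multi-GPU accuracy gap is that `nn.DataParallel` computes BatchNorm statistics separately on each GPU, so every replica only sees batch_size / num_gpus samples. A possible remedy is synchronized BatchNorm, which requires `DistributedDataParallel`. A minimal sketch, assuming the process group is already initialized (e.g. via `torchrun`); `build_hg_model` is a hypothetical constructor, not part of this repo:

```python
import torch
import torch.distributed as dist
import torch.nn as nn

# Assumes the process group was already initialized, e.g. launched via torchrun:
#   dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

model = build_hg_model().cuda()  # hypothetical constructor for the hourglass model

# Convert every BatchNorm layer to SyncBatchNorm so normalization statistics
# are computed over the whole batch across all GPUs rather than per replica.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```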
I double-checked the inference code. It seems the batch size is hardcoded as 1. We will fix this bug in the future, but for now please just use batch_size = 1 in the testing phase.
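Until the hardcoded value is fixed, the safest workaround is to make sure the test DataLoader also uses a batch size of 1. A minimal sketch; `test_dataset` and `test_loader` are placeholder names:

```python
from torch.utils.data import DataLoader

# Keep batch_size = 1 during testing to match the value hardcoded in the
# inference code; larger batch sizes currently give degraded AP/AR numbers.
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=4)
```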
yizhou-wang changed the title from "Multi-GPUs train?" to "batch_size>1 is not working in the testing phase" on Feb 9, 2021.
Hi,
Have you ever tried multi-GPU training? I simply added DataParallel, but the AP and AR are lower than with single-GPU training.
Thanks!
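For context, "adding DataParallel" here refers to the standard PyTorch wrapper, roughly like the sketch below; `build_hg_model` is a hypothetical constructor standing in for however the model is built in the training script:

```python
import torch
import torch.nn as nn

model = build_hg_model()  # hypothetical constructor for the hourglass model
if torch.cuda.device_count() > 1:
    # Replicates the model on every visible GPU and splits each input batch
    # across them; note BatchNorm statistics are then computed per replica.
    model = nn.DataParallel(model)
model = model.cuda()
```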