
Commit 2018bf4

Update GETTING_STARTED.md
1 parent c5e4ba6 commit 2018bf4

1 file changed (+1, -1 lines changed)

GETTING_STARTED.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ For 4-node (32-GPUs) AMP-based training, run:
 (node3)$ ./tools/train_net.py --config-file configs/Panoptic/odise_label_coco_50e.py --machine-rank 3 --num-machines 4 --dist-url tcp://${MASTER_ADDR}:29500 --num-gpus 8 --amp
 ```

-Not that our default training configurations are designed for 32 GPUs.
+Note that our default training configurations are designed for 32 GPUs.
 Since we use the AdamW optimizer, it is not clear how to scale the learning rate with batch size.
 However, we provide the ability to automatically scale the learning rate and the batch size for any number of GPUs used for training by passing in the `--ref $REFERENCE_WORLD_SIZE` argument.
 For example, if you set `$REFERENCE_WORLD_SIZE=32` while training on 8 GPUs, the batch size and learning rate will be set to 8/32 = 0.25 of the original ones.
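Outside the diff, a minimal sketch of how the scaling described above might be invoked on a single 8-GPU node. The flag names and config path are taken from the quoted hunk; the command itself is an assumption for illustration and is not part of this commit.

```sh
# Assumed single-node, 8-GPU run reusing the default 32-GPU configuration.
# Passing --ref 32 scales the batch size and learning rate by 8/32 = 0.25.
./tools/train_net.py \
    --config-file configs/Panoptic/odise_label_coco_50e.py \
    --num-gpus 8 --amp \
    --ref 32
```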

0 commit comments
