SimCLR performance #13
Does the performance of the SimCLR implementation in this repo match the published results?
Comments
@jlindsey15, in terms of CIFAR-10, yes! You can check Fig. B.4 in the SimCLR paper.
Ah I see, so there is no ImageNet support yet. Would you expect the current code to work if I plug in the ImageNet dataset? Thanks for the help!
@jlindsey15, logically it should work, but it may just take too much time to run on ImageNet. One note is that currently I am not using ...
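For what it's worth, a rough sketch of what plugging in ImageNet via torchvision could look like (the `TwoCropTransform` wrapper, the augmentation values, and the dataset path below are illustrative assumptions, not taken from this repo):

```python
import torchvision.datasets as datasets
import torchvision.transforms as transforms


class TwoCropTransform:
    """Return two independently augmented views of the same image,
    as contrastive training expects (illustrative helper)."""
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, x):
        return [self.transform(x), self.transform(x)]


# SimCLR-style augmentations for 224x224 ImageNet crops (values are assumptions).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

# '/path/to/imagenet/train' is a placeholder for the standard ImageFolder layout.
train_dataset = datasets.ImageFolder(
    root='/path/to/imagenet/train',
    transform=TwoCropTransform(train_transform),
)
```

The dataset then yields two augmented views per image, which is the input format the contrastive loss needs; the rest of the training loop would stay unchanged.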
@HobbitLong Can I ask for the hyper-parameter settings for ImageNet training?
@HobbitLong I think this code cannot be applied to DistributedDataParallel directly, because in DDP mode PyTorch computes the loss on each GPU separately. For example, on an 8-GPU machine with batch_size=1024, each GPU is assigned 128 samples. In that case, SimCLR (or SupContrast) will search for positive pairs among only 128 candidates, which will hurt downstream performance. To use DDP mode, I think a gather_layer op needs to be implemented, as in SimCLR.
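For illustration, a minimal sketch of such a gradient-preserving gather op is below (the `GatherLayer` / `gather_features` names are hypothetical, not part of this repo; `torch.distributed.all_gather` alone does not propagate gradients):

```python
import torch
import torch.distributed as dist


class GatherLayer(torch.autograd.Function):
    """All-gather feature shards from every GPU while keeping gradients
    flowing back to the local shard."""

    @staticmethod
    def forward(ctx, x):
        out = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(out, x)
        return tuple(out)

    @staticmethod
    def backward(ctx, *grads):
        # Sum gradients across processes, then return the slice for this rank.
        all_grads = torch.stack(grads)
        dist.all_reduce(all_grads)
        return all_grads[dist.get_rank()]


def gather_features(features):
    """Concatenate per-GPU feature shards into the full global batch."""
    return torch.cat(GatherLayer.apply(features), dim=0)
```

With something like this, each process would compute the contrastive loss over the full global batch of features (e.g. all 1024 samples) instead of only its local shard, matching the single-process behaviour.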