
fairness issues for comparison #27

Open
amsword opened this issue Jul 23, 2020 · 2 comments

Comments


amsword commented Jul 23, 2020

As I understand it, the proposed training strategy is to 1) train the backbone with the labels via the contrastive loss, then 2) fine-tune the last linear layer. The baseline trains the backbone and the last linear layer jointly with the cross-entropy loss. Do you have a reference for a baseline that 1) trains the backbone with the cross-entropy loss, then 2) re-trains the last linear layer from scratch?

The reason I ask is that the baseline here differs from the proposed solution in more than one way. The gain could come simply from the larger training budget, i.e. the iterations in pre-training plus the iterations in fine-tuning.
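
For concreteness, here is a minimal PyTorch sketch of the baseline described in this comment, with the budget matched to the two-stage protocol (the toy loader, epoch counts, and learning rate are placeholders, not the repo's actual training configuration):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Toy stand-in data; in practice this would be the real training loader.
loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 1000, (8,)))]

def train(model, parameters, epochs, lr=0.1):
    """Plain cross-entropy training of the given parameters."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(parameters, lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

model = resnet50(num_classes=1000)

# Stage 1: end-to-end cross-entropy, with an iteration budget matching
# the contrastive pre-training stage (epoch count is a placeholder).
train(model, model.parameters(), epochs=1)

# Stage 2: freeze the backbone, re-initialize the last linear layer from
# scratch, and re-train it alone with the linear fine-tuning budget.
# (A real linear probe would also freeze batch-norm statistics.)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 1000)  # fresh classifier head
train(model, model.fc.parameters(), epochs=1)
```

If this baseline matched the two-stage SupCon result, the gain could indeed be attributed to the extra budget rather than to the contrastive objective.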


HobbitLong (Owner) commented Jul 23, 2020

@amsword, excellent question. But I don't think this is a fairness issue.

IIRC, the baseline you propose here is sub-optimal compared to end-to-end cross-entropy training. In theory it should be, too, but you can still run such a baseline to verify.

@Chen-Song

If cross-entropy and SCL (supervised contrastive learning) were combined as a multi-task objective, i.e. the network takes the form of a shared backbone with multiple branches, what would the result be? Could it be better? This would also avoid fine-tuning the last linear layer separately.
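
A minimal sketch of such a joint objective, assuming a shared backbone with a classifier branch and a projection branch, and a simplified single-view supervised contrastive loss (the loss weight `lam`, projection size, and module names are illustrative, not from this repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.07):
    """Simplified single-view supervised contrastive loss on normalized z."""
    sim = z @ z.t() / temperature                          # pairwise similarities
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))              # drop self-contrast
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos.sum(1)
    valid = n_pos > 0                                      # anchors with >= 1 positive
    mean_log_prob = log_prob.masked_fill(~pos, 0.0).sum(1)[valid] / n_pos[valid]
    return -mean_log_prob.mean()

class MultiTaskNet(nn.Module):
    """Shared backbone feeding a classifier branch and a projection branch."""
    def __init__(self, backbone, feat_dim, num_classes, proj_dim=128):
        super().__init__()
        self.backbone = backbone
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), F.normalize(self.projector(h), dim=1)

# Joint objective: L = L_CE + lam * L_SupCon, trained in a single stage.
def joint_loss(logits, z, labels, lam=1.0):
    return F.cross_entropy(logits, labels) + lam * supcon_loss(z, labels)

# Toy usage with a trivial backbone (stand-in for e.g. a ResNet encoder):
net = MultiTaskNet(nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256)),
                   feat_dim=256, num_classes=10)
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
logits, z = net(x)
joint_loss(logits, z, y).backward()
```

At test time only the classifier branch would be used, which is what would remove the separate linear fine-tuning stage.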
