Problem title: | Analysis of distribution bias in contrastive learning |
---|---|
Type of research: | M1P |
Author: | Лидия Сергеевна Троешестова |
Scientific supervisor: | Candidate of Physical and Mathematical Sciences, Исаченко Роман Владимирович |
Recently, contrastive learning has regained popularity as a self-supervised representation learning technique. It compares positive (similar) and negative (dissimilar) pairs of samples to learn representations without labels. However, false-negative and false-positive errors in sampling bias the loss function. This paper analyzes ways to eliminate these biases. Starting from the fully supervised case, we develop debiased contrastive models that account for same-label datapoints without requiring knowledge of the true labels, and explore their properties. Using the debiased representations, we measure prediction accuracy on a downstream classification task. Experiments are carried out on the CIFAR-10 dataset and demonstrate the applicability and robustness of the proposed method in scenarios where extensive labeling is expensive or infeasible.
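To illustrate the idea of correcting for same-label negatives, here is a minimal PyTorch sketch of a debiased contrastive (InfoNCE-style) loss in the spirit of the estimator from chingyaoc/DCL (Chuang et al., 2020). The function name, signature, and default values are illustrative assumptions, not the actual DebiasedPos API.

```python
import math
import torch


def debiased_contrastive_loss(pos_sim, neg_sim, tau_plus=0.1, temperature=0.5):
    """Sketch of a debiased InfoNCE loss.

    pos_sim: (B,) cosine similarities between each anchor and its positive.
    neg_sim: (B, N) cosine similarities between each anchor and N sampled negatives.
    tau_plus: assumed class prior, i.e. probability that a sampled "negative"
              actually shares the anchor's label (a false negative).
    """
    pos = torch.exp(pos_sim / temperature)   # (B,)
    neg = torch.exp(neg_sim / temperature)   # (B, N)
    n = neg.size(1)

    # Debiased negative term: subtract the expected contribution of false
    # negatives, weighted by the class prior tau_plus, and clamp at the
    # theoretical lower bound n * e^{-1/t}.
    ng = (neg.sum(dim=-1) - n * tau_plus * pos) / (1.0 - tau_plus)
    ng = torch.clamp(ng, min=n * math.exp(-1.0 / temperature))

    # Standard InfoNCE form, but with the corrected negative mass.
    return (-torch.log(pos / (pos + ng))).mean()


# Toy usage with random embeddings (purely illustrative).
if __name__ == "__main__":
    z = torch.randn(8, 128)
    z_pos = torch.randn(8, 128)
    z_neg = torch.randn(8, 16, 128)
    cos = torch.nn.functional.cosine_similarity
    pos_sim = cos(z, z_pos, dim=-1)                                  # (8,)
    neg_sim = cos(z.unsqueeze(1).expand_as(z_neg), z_neg, dim=-1)    # (8, 16)
    print(debiased_contrastive_loss(pos_sim, neg_sim).item())
```

Setting `tau_plus=0` recovers the standard (biased) InfoNCE objective, which makes the correction term easy to ablate in experiments.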
- A Python package DebiasedPos with the full implementation is available here.
- Code with all experiment visualisations is available here. View in Colab.
Inspired by chingyaoc/DCL.