Dual Temperature Helps Contrastive Learning Without Many Negative Samples: Towards Understanding and Simplifying MoCo
Format: Journal Article
Language: English
Published: 30-03-2022
Summary: Contrastive learning (CL) is widely known to require many negative samples, 65536 in MoCo for instance, for which the performance of a dictionary-free framework is often inferior because the negative sample size (NSS) is limited by its mini-batch size (MBS). To decouple the NSS from the MBS, a dynamic dictionary has been adopted in a large volume of CL frameworks, among which arguably the most popular is the MoCo family. In essence, MoCo adopts a momentum-based queue dictionary, for which we perform a fine-grained analysis of its size and consistency. We point out that the InfoNCE loss used in MoCo implicitly attracts anchors to their corresponding positive samples with varying strengths of penalty, and we identify this inter-anchor hardness-awareness property as a major reason why a large dictionary is necessary. Our findings motivate us to simplify MoCo v2 by removing its dictionary as well as its momentum. Based on an InfoNCE loss with the proposed dual temperature, our simplified frameworks, SimMoCo and SimCo, outperform MoCo v2 by a visible margin. Moreover, our work bridges the gap between CL and non-CL frameworks, contributing to a more unified understanding of these two mainstream frameworks in self-supervised learning (SSL). Code is available at: https://bit.ly/3LkQbaT
DOI: 10.48550/arxiv.2203.17248
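The abstract describes replacing MoCo's momentum queue with an InfoNCE loss that uses two temperatures: one shapes the usual softmax over the positive and the negatives (intra-anchor hardness-awareness), while a second only rescales each anchor's contribution (the inter-anchor property identified above). The exact formulation is not given in this record, so the sketch below is an illustrative PyTorch reading of that idea; the function name, `tau_intra`, `tau_inter`, and the detached reweighting are assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def dual_temperature_infonce(q, k, queue=None, tau_intra=0.1, tau_inter=1.0):
    """Illustrative dual-temperature InfoNCE sketch (assumed formulation).

    q, k      : (N, D) anchor / positive embeddings.
    queue     : optional (K, D) negative bank; None means in-batch negatives
                only, i.e. the dictionary-free setting the abstract refers to.
    tau_intra : temperature inside the softmax (intra-anchor hardness-awareness).
    tau_inter : temperature used only in a detached per-anchor weight, standing
                in for the inter-anchor temperature.
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    pos = (q * k).sum(dim=1, keepdim=True)                      # (N, 1) positive logits
    if queue is None:
        neg = q @ k.t()                                         # (N, N) in-batch negatives
        self_mask = torch.eye(len(q), dtype=torch.bool, device=q.device)
        neg = neg.masked_fill(self_mask, float("-inf"))         # drop q_i vs its own k_i
    else:
        neg = q @ queue.t()                                     # (N, K) queued negatives
    logits = torch.cat([pos, neg], dim=1)                       # positive sits in column 0

    def pos_prob(t):
        # probability assigned to the positive under temperature t
        return F.softmax(logits / t, dim=1)[:, 0]

    # Detached ratio: each anchor's loss is rescaled so its gradient magnitude
    # follows tau_inter, while the gradient direction comes from the tau_intra softmax.
    weight = ((1.0 - pos_prob(tau_inter)) /
              (1.0 - pos_prob(tau_intra)).clamp_min(1e-8)).detach()
    return -(weight * torch.log(pos_prob(tau_intra).clamp_min(1e-8))).mean()


# Minimal usage with random embeddings (shapes only, no training loop):
q = torch.randn(256, 128, requires_grad=True)
k = torch.randn(256, 128)
loss = dual_temperature_infonce(q, k)
loss.backward()
```

Because the reweighting factor is detached, the second temperature changes only how strongly each anchor is pushed, not where it is pushed, which is one way to read the abstract's claim that the inter-anchor hardness-awareness, rather than the dictionary itself, is what a large queue was providing.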