With a little help from my friends: Nearest-neighbor contrastive learning of visual representations
Self-supervised learning algorithms based on instance discrimination train encoders to be
invariant to pre-defined transformations of the same instance. While most methods treat …
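As a rough illustration of the nearest-neighbor idea in this title, here is a minimal PyTorch sketch in which the positive for each sample is its nearest neighbor in a support queue rather than only its own augmented view; the names `nn_contrastive_loss` and `support_queue` are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def nn_contrastive_loss(z1, z2, support_queue, temperature=0.1):
    """Nearest-neighbor contrastive sketch: the positive for each
    embedding in z1 is its nearest neighbor in a support queue,
    contrasted against the second view z2 of the whole batch."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    q = F.normalize(support_queue, dim=1)

    # Nearest neighbor of each z1 in the support set (cosine similarity).
    nn_idx = (z1 @ q.T).argmax(dim=1)
    nn = q[nn_idx]                       # (B, D) nearest-neighbor positives

    logits = nn @ z2.T / temperature     # (B, B) similarities to all z2
    labels = torch.arange(z1.size(0))    # positive sits on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
B, D, Q = 8, 128, 256
loss = nn_contrastive_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(Q, D))
```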
On feature decorrelation in self-supervised learning
In self-supervised representation learning, a common idea behind most of the state-of-the-
art approaches is to enforce the robustness of the representations to predefined …
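One common way to enforce such robustness without representational collapse is to decorrelate feature dimensions across views; the sketch below shows a Barlow Twins-style cross-correlation penalty as a stand-in (an illustrative objective under that assumption, not necessarily the exact one analyzed in the paper).

```python
import torch

def decorrelation_loss(z1, z2, lam=5e-3):
    """Decorrelation sketch: push the cross-correlation matrix of two
    views toward the identity. Diagonal terms enforce invariance,
    off-diagonal terms decorrelate feature dimensions."""
    B = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)   # standardize per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.T @ z2 / B                              # (D, D) cross-correlation

    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

loss = decorrelation_loss(torch.randn(32, 64), torch.randn(32, 64))
```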
Mine your own anatomy: Revisiting medical image segmentation with extremely limited labels
Recent studies on contrastive learning have achieved remarkable performance solely by
leveraging few labels in medical image segmentation. Existing methods mainly focus on …
Sound source localization is all about cross-modal alignment
Humans can easily perceive the direction of sound sources in a visual scene, termed sound
source localization. Recent studies on learning-based sound source localization have …
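The cross-modal alignment in this title can be sketched as a symmetric audio-visual contrastive loss, with localization read off as audio-to-pixel similarity; this is a generic sketch of the idea, and all names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def cross_modal_alignment_loss(img_emb, aud_emb, temperature=0.07):
    """Audio-visual alignment sketch: matching image/audio pairs (same
    batch index) are pulled together, mismatched pairs pushed apart."""
    img = F.normalize(img_emb, dim=1)
    aud = F.normalize(aud_emb, dim=1)
    logits = img @ aud.T / temperature
    labels = torch.arange(img.size(0))
    # Symmetric: align image-to-audio and audio-to-image.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

# A localization map can then be read off as the similarity between the
# audio embedding and each spatial position of a visual feature map.
feat_map = torch.randn(4, 128, 14, 14)           # (B, D, H, W) visual features
aud = F.normalize(torch.randn(4, 128), dim=1)
sim_map = torch.einsum('bdhw,bd->bhw', F.normalize(feat_map, dim=1), aud)
```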
Unsupervised object-level representation learning from scene images
Contrastive self-supervised learning has largely narrowed the gap to supervised pre-training
on ImageNet. However, its success highly relies on the object-centric priors of ImageNet, i.e., …
A unified, scalable framework for neural population decoding
Our ability to use deep learning approaches to decipher neural activity would likely benefit
from greater scale, in terms of both the model size and the datasets. However, the …
S-CLIP: Semi-supervised vision-language learning using few specialist captions
Vision-language models, such as contrastive language-image pre-training (CLIP), have
demonstrated impressive results in natural image domains. However, these models often …
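One way semi-supervision can enter a CLIP-style setup is to build soft caption targets for images that lack captions from their similarity to the available specialist captions; the sketch below illustrates that generic mechanism only, and is not a claim about S-CLIP's exact method.

```python
import torch
import torch.nn.functional as F

def pseudo_caption_targets(unpaired_img_emb, caption_emb, temperature=0.07):
    """Sketch: for images without captions, build a soft target
    distribution over the pool of existing specialist captions."""
    img = F.normalize(unpaired_img_emb, dim=1)
    cap = F.normalize(caption_emb, dim=1)
    # Soft distribution over candidate captions for each unpaired image.
    return F.softmax(img @ cap.T / temperature, dim=1)

targets = pseudo_caption_targets(torch.randn(16, 512), torch.randn(100, 512))
```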
Soft neighbors are positive supporters in contrastive visual representation learning
Contrastive learning methods train visual encoders by comparing views from one instance to
others. Typically, the views created from one instance are set as positive, while views from …
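The "soft neighbor" idea can be illustrated by replacing the usual hard one-hot contrastive target with a target that gives partial positive weight to similar neighbors; the mixing scheme below is an arbitrary illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def soft_neighbor_loss(z1, z2, temperature=0.1):
    """Sketch of a soft-positive objective: the target distribution
    mixes the hard same-instance positive with soft weights over
    similar neighbors in the batch."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature

    hard = torch.eye(z1.size(0))
    # Soft weights from (detached) cross-view similarity; the 0.5
    # mixing coefficient is an arbitrary illustration, not a tuned value.
    soft = F.softmax(logits.detach(), dim=1)
    target = 0.5 * hard + 0.5 * soft
    return -(target * F.log_softmax(logits, dim=1)).sum(1).mean()

loss = soft_neighbor_loss(torch.randn(8, 128), torch.randn(8, 128))
```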
Understand and improve contrastive learning methods for visual representation: A review
R Liu - arXiv preprint arXiv:2106.03259, 2021 - arxiv.org
Traditional supervised learning methods are hitting a bottleneck because of their
dependency on expensive manually labeled data and their weaknesses such as limited …
Optimal positive generation via latent transformation for contrastive learning
Contrastive learning, which learns to contrast positive with negative pairs of samples, has
been popular for self-supervised visual representation learning. Although great effort has …
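Generating positives by transforming latent codes rather than pixels can be pictured with the schematic below: a positive view is decoded from a small perturbation of a sample's latent code. The generator architecture and noise scale are hypothetical placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

class LatentPositiveGenerator(nn.Module):
    """Schematic of positive generation in latent space: a positive is
    decoded from a perturbed copy of the sample's latent code."""
    def __init__(self, latent_dim=64, out_dim=3 * 32 * 32, sigma=0.1):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )
        self.sigma = sigma

    def forward(self, z):
        # Transform the latent code instead of the pixels: nearby latents
        # should decode to semantically similar, harder positives.
        z_pos = z + self.sigma * torch.randn_like(z)
        return self.decoder(z), self.decoder(z_pos)

gen = LatentPositiveGenerator()
anchor, positive = gen(torch.randn(8, 64))
```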