With a little help from my friends: Nearest-neighbor contrastive learning of visual representations

D Dwibedi, Y Aytar, J Tompson… - Proceedings of the …, 2021 - openaccess.thecvf.com
Self-supervised learning algorithms based on instance discrimination train encoders to be
invariant to pre-defined transformations of the same instance. While most methods treat …

On feature decorrelation in self-supervised learning

T Hua, W Wang, Z Xue, S Ren… - Proceedings of the …, 2021 - openaccess.thecvf.com
In self-supervised representation learning, a common idea behind most of the state-of-the-
art approaches is to enforce the robustness of the representations to predefined …

Mine your own anatomy: Revisiting medical image segmentation with extremely limited labels

C You, W Dai, F Liu, Y Min, NC Dvornek… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Recent studies on contrastive learning have achieved remarkable performance solely by
leveraging few labels in medical image segmentation. Existing methods mainly focus on …

Sound source localization is all about cross-modal alignment

A Senocak, H Ryu, J Kim, TH Oh… - Proceedings of the …, 2023 - openaccess.thecvf.com
Humans can easily perceive the direction of sound sources in a visual scene, termed sound
source localization. Recent studies on learning-based sound source localization have …

Unsupervised object-level representation learning from scene images

J **e, X Zhan, Z Liu, YS Ong… - Advances in Neural …, 2021 - proceedings.neurips.cc
Contrastive self-supervised learning has largely narrowed the gap to supervised pre-training
on ImageNet. However, its success highly relies on the object-centric priors of ImageNet, i.e. …

A unified, scalable framework for neural population decoding

M Azabou, V Arora, V Ganesh, X Mao… - Advances in …, 2024 - proceedings.neurips.cc
Our ability to use deep learning approaches to decipher neural activity would likely benefit
from greater scale, in terms of both the model size and the datasets. However, the …

S-CLIP: Semi-supervised vision-language learning using few specialist captions

S Mo, M Kim, K Lee, J Shin - Advances in Neural …, 2023 - proceedings.neurips.cc
Vision-language models, such as contrastive language-image pre-training (CLIP), have
demonstrated impressive results in natural image domains. However, these models often …

Soft neighbors are positive supporters in contrastive visual representation learning

C Ge, J Wang, Z Tong, S Chen, Y Song… - arXiv preprint arXiv …, 2023 - arxiv.org
Contrastive learning methods train visual encoders by comparing views from one instance to
others. Typically, the views created from one instance are set as positive, while views from …

Understand and improve contrastive learning methods for visual representation: A review

R Liu - arXiv preprint arXiv:2106.03259, 2021 - arxiv.org
Traditional supervised learning methods are hitting a bottleneck because of their
dependency on expensive manually labeled data and their weaknesses such as limited …

Optimal positive generation via latent transformation for contrastive learning

Y Li, H Chang, B Ma, S Shan… - Advances in Neural …, 2022 - proceedings.neurips.cc
Contrastive learning, which learns to contrast positive with negative pairs of samples, has
been popular for self-supervised visual representation learning. Although great effort has …