Understanding contrastive learning requires incorporating inductive biases

N Saunshi, J Ash, S Goel, D Misra… - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Contrastive learning is a popular form of self-supervised learning that encourages
augmentations (views) of the same input to have more similar representations compared to …
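
As a rough illustration of the objective this line of work studies, here is a minimal InfoNCE-style contrastive loss in PyTorch; the in-batch-negatives setup and the `temperature` value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style loss sketch: row i of z1 and z2 are two
    augmentations (views) of the same input; the other rows in the
    batch serve as negatives."""
    z1 = F.normalize(z1, dim=1)                  # (N, d) unit-norm view embeddings
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) scaled cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```

Minimizing this loss pulls matched views together while pushing the other batch items apart, which is the behavior the snippet above describes.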

Unsupervised representation learning for time series: A review

Q Meng, H Qian, Y Liu, Y Xu, Z Shen, L Cui - arXiv preprint arXiv …, 2023 - arxiv.org
Unsupervised representation learning approaches aim to learn discriminative feature
representations from unlabeled data, without requiring every sample to be annotated …

Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap

Y Wang, Q Zhang, Y Wang, J Yang, Z Lin - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, contrastive learning has emerged as a promising approach to large-scale self-
supervised learning. However, a theoretical understanding of how it works remains unclear. In …

Improving self-supervised learning by characterizing idealized representations

Y Dubois, S Ermon, TB Hashimoto… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …

Does Negative Sampling Matter? A Review with Insights into its Theory and Applications

Z Yang, M Ding, T Huang, Y Cen, J Song… - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024 - ieeexplore.ieee.org
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …

On the stepwise nature of self-supervised learning

JB Simon, M Knutins, L Ziyin, D Geisz… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
We present a simple picture of the training process of self-supervised learning methods with
dual deep networks. In our picture, these methods learn their high-dimensional embeddings …

Do more negative samples necessarily hurt in contrastive learning?

P Awasthi, N Dikkala, P Kamath - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Recent investigations in noise contrastive estimation suggest, both empirically and
theoretically, that while having more “negative samples” in the contrastive loss improves …
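
A toy sketch of the quantity at issue, assuming a simple setup where k negatives are drawn uniformly from a candidate pool; the function name, sampling scheme, and default values are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def nce_with_k_negatives(anchor, positive, pool, k=64, temperature=0.1):
    """Contrastive loss with k negatives sampled uniformly from a pool,
    a toy setup for studying how the number of negatives affects the loss.

    anchor, positive: (d,) representations; pool: (M, d) candidate negatives.
    """
    idx = torch.randint(pool.size(0), (k,))
    negs = pool[idx]                                    # (k, d) sampled negatives
    cands = torch.cat([positive.unsqueeze(0), negs])    # (k+1, d), positive first
    logits = F.cosine_similarity(anchor.unsqueeze(0), cands) / temperature
    # the positive sits at index 0, so the target class is 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```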

Understanding multimodal contrastive learning and incorporating unpaired data

R Nakada, HI Gulluk, Z Deng, W Ji… - International …, 2023 - proceedings.mlr.press
Language-supervised vision models have recently attracted great attention in
computer vision. A common approach to building such models is to use contrastive learning on …
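
For context, CLIP-style language-supervised training typically optimizes a symmetric contrastive loss over paired image and text embeddings; below is a minimal sketch, where the argument names and the temperature are assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over paired image/text embeddings,
    in the spirit of CLIP-like language-supervised training.

    image_emb, text_emb: (N, d); row i of each is a matched pair.
    """
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = image_emb @ text_emb.t() / temperature   # (N, N) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    # classify the matching text for each image, and vice versa
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```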

The mechanism of prediction head in non-contrastive self-supervised learning

Z Wen, Y Li - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
The surprising discovery of the BYOL method shows that negative samples can be replaced
by adding a prediction head to the network. It is mysterious why, even when there exist …

Augmentations in graph contrastive learning: Current methodological flaws & towards better practices

P Trivedi, ES Lubana, Y Yan, Y Yang… - Proceedings of the ACM …, 2022 - dl.acm.org
Graph classification has a wide range of applications in bioinformatics, social sciences,
automated fake news detection, web document classification, and more. In many practical …
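
One example of the kind of augmentation such work scrutinizes is random edge dropping; here is a minimal sketch assuming the (2, E) COO edge layout used by libraries like PyTorch Geometric (the function name and drop rate are illustrative).

```python
import torch

def drop_edges(edge_index, p=0.2):
    """Randomly drop a fraction p of edges, a common augmentation
    in graph contrastive learning.

    edge_index: (2, E) integer tensor of (source, target) pairs.
    """
    keep = torch.rand(edge_index.size(1)) >= p   # Bernoulli keep-mask over E edges
    return edge_index[:, keep]
```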