Understanding contrastive learning requires incorporating inductive biases
Contrastive learning is a popular form of self-supervised learning that encourages
augmentations (views) of the same input to have more similar representations compared to …
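The snippet above describes the core contrastive objective: pull two augmented views of the same input together while pushing them away from other samples. As a rough illustration only (a hypothetical InfoNCE-style sketch, not taken from any of the listed papers), the idea can be written as:

# Minimal InfoNCE-style contrastive loss sketch (illustrative assumption,
# not the method of any paper listed here).
# z1 and z2 hold embeddings of two augmented views of the same batch;
# row i of z1 and row i of z2 form the positive pair, all other rows act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z1 = F.normalize(z1, dim=1)                 # L2-normalize so dot products are cosine similarities
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # pairwise similarities between the two views
    targets = torch.arange(z1.size(0), device=z1.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Hypothetical usage: the same encoder applied to two augmentations of a batch x
# loss = info_nce(encoder(aug_a(x)), encoder(aug_b(x)))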
Unsupervised representation learning for time series: A review
Unsupervised representation learning approaches aim to learn discriminative feature
representations from unlabeled data, without the requirement of annotating every sample …
Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap
Recently, contrastive learning has risen to be a promising approach for large-scale self-
supervised learning. However, theoretical understanding of how it works is still unclear. In …
Improving self-supervised learning by characterizing idealized representations
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …
Does Negative Sampling Matter? A Review with Insights into its Theory and Applications
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …
On the stepwise nature of self-supervised learning
We present a simple picture of the training process of self-supervised learning methods with
dual deep networks. In our picture, these methods learn their high-dimensional embeddings …
Do more negative samples necessarily hurt in contrastive learning?
Recent investigations in noise contrastive estimation suggest, both empirically as well as
theoretically, that while having more “negative samples” in the contrastive loss improves …
Understanding multimodal contrastive learning and incorporating unpaired data
Language-supervised vision models have recently attracted great attention in
computer vision. A common approach to build such models is to use contrastive learning on …
The mechanism of prediction head in non-contrastive self-supervised learning
The surprising discovery of the BYOL method shows the negative samples can be replaced
by adding the prediction head to the network. It is mysterious why even when there exist …
Augmentations in graph contrastive learning: Current methodological flaws & towards better practices
Graph classification has a wide range of applications in bioinformatics, social sciences,
automated fake news detection, web document classification, and more. In many practical …