Contrastive representation learning: A framework and review
Contrastive Learning has recently received interest due to its success in self-supervised
representation learning in the computer vision domain. However, the origins of Contrastive …
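As a concrete reference point for the contrastive objective this review surveys, here is a minimal sketch of the InfoNCE (NT-Xent) loss popularized by methods such as SimCLR; the batch size, embedding dimension, and temperature below are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings; row i of z1 and row i of z2 come from two
    augmentations of the same input, so they form the positive pair, and
    the remaining 2N - 2 embeddings in the batch serve as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive of sample i is sample i + n (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random tensors standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```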
Contrastive self-supervised learning: review, progress, challenges and future research directions
In the last decade, deep supervised learning has had tremendous success. However, its
flaws, such as its dependency on manual and costly annotations on large datasets and …
Multimodal foundation models: From specialists to general-purpose assistants
An introduction to neural data compression
Neural compression is the application of neural networks and other machine learning
methods to data compression. Recent advances in statistical machine learning have opened …
Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining
Mainstream 3D representation learning approaches are built upon contrastive or generative
modeling pretext tasks, where great improvements in performance on various downstream …
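The contrastive-versus-generative split the abstract refers to can be illustrated by a toy objective that mixes both pretext tasks on a shared encoder. This is not ReCon's architecture (which trains a generative student guided by contrastive cross-modal teachers on point clouds); every module, dimension, and weight below is a hypothetical stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastAndReconstruct(nn.Module):
    """Toy mix of a contrastive and a generative (reconstruction) pretext task."""

    def __init__(self, dim_in=256, dim_z=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_z))
        self.decoder = nn.Sequential(nn.Linear(dim_z, 128), nn.ReLU(), nn.Linear(128, dim_in))

    def forward(self, x1, x2, alpha=0.5, temperature=0.2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        # Contrastive term: match each sample in view 1 to its pair in view 2.
        logits = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t() / temperature
        contrast = F.cross_entropy(logits, torch.arange(x1.size(0)))
        # Generative term: reconstruct the raw input from its latent code.
        recon = F.mse_loss(self.decoder(z1), x1)
        return alpha * contrast + (1 - alpha) * recon

model = ContrastAndReconstruct()
x1, x2 = torch.randn(8, 256), torch.randn(8, 256)
print(model(x1, x2))
```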
Context autoencoder for self-supervised representation learning
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE),
for self-supervised representation pretraining. We pretrain an encoder by making predictions …
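The core CAE idea, encoding only the visible patches and making predictions for the masked ones in latent space, can be sketched in a few lines; the linear "encoder", the mean-pooled context, and all sizes here are stand-ins for the paper's ViT encoder and latent contextual regressor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 64
encoder = nn.Linear(dim, dim)        # stand-in for a ViT patch encoder
regressor = nn.Linear(dim, dim)      # stand-in for the latent contextual regressor

patches = torch.randn(16, dim)       # one image as 16 patch tokens
mask = torch.zeros(16, dtype=torch.bool)
mask[:8] = True                      # mask half of the patches

z_visible = encoder(patches[~mask])  # the encoder sees visible patches only
with torch.no_grad():                # targets: encoder outputs on masked patches
    target = encoder(patches[mask])

# Predict each masked patch's latent code from a pooled visible context.
context = z_visible.mean(dim=0, keepdim=True).expand(int(mask.sum()), dim)
pred = regressor(context)
loss = F.mse_loss(pred, target)      # the prediction happens in latent space
print(loss)
```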
Emerging properties in self-supervised vision transformers
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …
Barlow twins: Self-supervised learning via redundancy reduction
Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large
computer vision benchmarks. A successful approach to SSL is to learn embeddings which …
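The redundancy reduction named in the title reduces to a short loss: drive the cross-correlation matrix of the two views' normalized embeddings toward the identity. A minimal sketch, with batch size, dimension, and the off-diagonal weight lam chosen for illustration:

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins redundancy-reduction objective (sketch).

    The cross-correlation matrix between batch-normalized embeddings of two
    views is pushed toward the identity: diagonal terms -> 1 (invariance),
    off-diagonal terms -> 0 (decorrelation, i.e. redundancy reduction).
    """
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)            # normalize per dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.t() @ z2 / n                           # (d, d) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

print(barlow_twins_loss(torch.randn(32, 16), torch.randn(32, 16)))
```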
Decoupled contrastive learning
Contrastive learning (CL) is one of the most successful paradigms for self-supervised
learning (SSL). In a principled way, it considers two augmented “views” of the same image …
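The "decoupling" in the title is a small change to the InfoNCE denominator: remove the positive pair from it, so the positive and negative terms no longer interact. A minimal sketch, simplified to cross-view negatives only (the full method also uses same-view negatives and an optional weighting function):

```python
import torch
import torch.nn.functional as F

def dcl_loss(z1, z2, temperature=0.1):
    """Decoupled contrastive loss (simplified sketch).

    InfoNCE over two views, except the positive pair is excluded from the
    denominator, decoupling the positive and negative gradient terms.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature            # (N, N); diagonal = positives
    pos = sim.diagonal()                       # positive-pair similarities
    eye = torch.eye(sim.size(0), dtype=torch.bool)
    neg = torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1)
    return (neg - pos).mean()                  # denominator excludes the positive

print(dcl_loss(torch.randn(8, 64), torch.randn(8, 64)))
```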
Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods
Self-Supervised Learning (SSL) surmises that inputs and pairwise positive
relationships are enough to learn meaningful representations. Although SSL has recently …
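The paper's link between SSL and spectral embedding can be made concrete with the classical method it connects to, Laplacian eigenmaps: pairwise positive relationships define a graph, and the embedding comes from the bottom eigenvectors of its Laplacian. A toy sketch, with a random affinity matrix standing in for positive-pair relations:

```python
import torch

# Laplacian eigenmaps: embed 10 points from pairwise affinities alone.
A = torch.rand(10, 10)
A = (A + A.t()) / 2                   # symmetric affinity (positive-relation) matrix
A.fill_diagonal_(0)
L = torch.diag(A.sum(dim=1)) - A      # unnormalized graph Laplacian L = D - A
eigvals, eigvecs = torch.linalg.eigh(L)
embedding = eigvecs[:, 1:3]           # skip the constant eigenvector; 2-D embedding
print(embedding.shape)                # torch.Size([10, 2])
```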
From canonical correlation analysis to self-supervised graph neural networks
We introduce a conceptually simple yet effective model for self-supervised representation
learning with graph data. It follows the previous methods that generate two views of an input …
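The CCA-style objective this abstract alludes to combines an invariance term between the two graph views with a soft decorrelation (whitening) term per view. A minimal sketch of that loss; in the paper the views come from graph augmentations such as edge dropping and feature masking fed through a shared GNN, while here random tensors stand in for the GNN outputs and lam is chosen arbitrarily.

```python
import torch

def cca_ssg_loss(z1, z2, lam=1e-3):
    """CCA-style SSL objective (sketch of the CCA-SSG formulation).

    z1, z2: (N, D) node embeddings from two augmented views of one graph.
    The invariance term pulls the views together; the decorrelation terms
    push each view's feature covariance toward the identity.
    """
    n = z1.size(0)
    # Standardize each feature dimension (zero mean, unit variance).
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    inv = (z1 - z2).pow(2).sum()                      # invariance
    eye = torch.eye(z1.size(1))
    dec = ((z1.t() @ z1 / n - eye).pow(2).sum()
           + (z2.t() @ z2 / n - eye).pow(2).sum())    # decorrelation
    return inv + lam * dec

print(cca_ssg_loss(torch.randn(64, 32), torch.randn(64, 32)))
```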