Contrastive representation learning: A framework and review

PH Le-Khac, G Healy, AF Smeaton - IEEE Access, 2020 - ieeexplore.ieee.org
Contrastive Learning has recently received interest due to its success in self-supervised
representation learning in the computer vision domain. However, the origins of Contrastive …

Contrastive self-supervised learning: review, progress, challenges and future research directions

P Kumar, P Rawat, S Chauhan - International Journal of Multimedia …, 2022 - Springer
In the last decade, deep supervised learning has had tremendous success. However, its
flaws, such as its dependency on manual and costly annotations on large datasets and …

Multimodal foundation models: From specialists to general-purpose assistants

C Li, Z Gan, Z Yang, J Yang, L Li… - … and Trends® in …, 2024 - nowpublishers.com

Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining

Z Qi, R Dong, G Fan, Z Ge, X Zhang… - … on Machine Learning, 2023 - proceedings.mlr.press
Mainstream 3D representation learning approaches are built upon contrastive or generative
modeling pretext tasks, where great improvements in performance on various downstream …

Context autoencoder for self-supervised representation learning

X Chen, M Ding, X Wang, Y Xin, S Mo, Y Wang… - International Journal of …, 2024 - Springer
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE),
for self-supervised representation pretraining. We pretrain an encoder by making predictions …

Emerging properties in self-supervised vision transformers

M Caron, H Touvron, I Misra, H Jégou… - Proceedings of the …, 2021 - openaccess.thecvf.com
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …

Barlow twins: Self-supervised learning via redundancy reduction

J Zbontar, L Jing, I Misra, Y LeCun… - … on machine learning, 2021 - proceedings.mlr.press
Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large
computer vision benchmarks. A successful approach to SSL is to learn embeddings which …

Decoupled contrastive learning

CH Yeh, CY Hong, YC Hsu, TL Liu, Y Chen… - European conference on …, 2022 - Springer
Contrastive learning (CL) is one of the most successful paradigms for self-supervised
learning (SSL). In a principled way, it considers two augmented “views” of the same image …

Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods

R Balestriero, Y LeCun - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Self-Supervised Learning (SSL) surmises that inputs and pairwise positive
relationships are enough to learn meaningful representations. Although SSL has recently …

From canonical correlation analysis to self-supervised graph neural networks

H Zhang, Q Wu, J Yan, D Wipf… - Advances in Neural …, 2021 - proceedings.neurips.cc
We introduce a conceptually simple yet effective model for self-supervised representation
learning with graph data. It follows the previous methods that generate two views of an input …