To compress or not to compress—self-supervised learning and information theory: A review

R Shwartz-Ziv, Y LeCun - Entropy, 2024 - mdpi.com
Deep neural networks excel in supervised learning tasks but are constrained by the need for
extensive labeled data. Self-supervised learning emerges as a promising alternative …

VICRegL: Self-supervised learning of local visual features

A Bardes, J Ponce, Y LeCun - Advances in Neural …, 2022 - proceedings.neurips.cc
Most recent self-supervised methods for learning image representations focus on either
producing a global feature with invariance properties, or producing a set of local features …

RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank

Q Garrido, R Balestriero, L Najman… - … on machine learning, 2023 - proceedings.mlr.press
Joint-Embedding Self-Supervised Learning (JE-SSL) has seen a rapid
development, with the emergence of many method variations but only few principled …
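The rank criterion in the RankMe entry above has a simple closed form: the exponential of the Shannon entropy of the embedding matrix's normalized singular-value distribution. A minimal sketch of that smooth rank measure, assuming a `(samples, dim)` embedding matrix (function and variable names are illustrative, not from the authors' code):

```python
import numpy as np

def effective_rank(embeddings: np.ndarray, eps: float = 1e-7) -> float:
    """Smooth rank measure: exp of the entropy of the normalized singular values."""
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / (np.abs(s).sum() + eps)   # normalize singular values to a distribution
    p = p + eps                        # guard against log(0)
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))                               # well-spread embeddings
collapsed = np.outer(rng.normal(size=1000), rng.normal(size=32))  # rank-1 collapse

print(effective_rank(X))          # close to the full dimension, 32
print(effective_rank(collapsed))  # close to 1
```

A higher effective rank indicates embeddings that spread information across more dimensions, which is the property the paper links to downstream performance.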

How does information bottleneck help deep learning?

K Kawaguchi, Z Deng, X Ji… - … Conference on Machine …, 2023 - proceedings.mlr.press
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …

Audiovisual masked autoencoders

MI Georgescu, E Fonseca, RT Ionescu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Can we leverage the audiovisual information already present in video to improve self-supervised
representation learning? To answer this question, we study various pretraining …

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

N Tomasev, I Bica, B McWilliams, L Buesing… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite recent progress made by self-supervised methods in representation learning with
residual networks, they still underperform supervised learning on the ImageNet classification …

On the duality between contrastive and non-contrastive self-supervised learning

Q Garrido, Y Chen, A Bardes, L Najman… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent approaches in self-supervised learning of image representations can be categorized
into different families of methods and, in particular, can be divided into contrastive and non …

MC-JEPA: A joint-embedding predictive architecture for self-supervised learning of motion and content features

A Bardes, J Ponce, Y LeCun - arXiv preprint arXiv:2307.12698, 2023 - arxiv.org
Self-supervised learning of visual representations has been focusing on learning content
features, which do not capture object motion or location, and focus on identifying and …

Self-supervised learning via maximum entropy coding

X Liu, Z Wang, YL Li, S Wang - Advances in Neural …, 2022 - proceedings.neurips.cc
A mainstream type of current self-supervised learning methods pursues a general-purpose
representation that can be well transferred to downstream tasks, typically by optimizing on a …

LVM-Med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching

DMH Nguyen, H Nguyen, N Diep… - Advances in …, 2024 - proceedings.neurips.cc
Obtaining large pre-trained models that can be fine-tuned to new tasks with limited
annotated samples has remained an open challenge for medical imaging data. While pre …