To compress or not to compress—self-supervised learning and information theory: A review
Deep neural networks excel in supervised learning tasks but are constrained by the need for
extensive labeled data. Self-supervised learning emerges as a promising alternative …
VICRegL: Self-supervised learning of local visual features
Most recent self-supervised methods for learning image representations focus on either
producing a global feature with invariance properties, or producing a set of local features …
RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank
Abstract: Joint-Embedding Self-Supervised Learning (JE-SSL) has seen a rapid
development, with the emergence of many method variations but only few principled …
How does information bottleneck help deep learning?
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …
Audiovisual masked autoencoders
Can we leverage the audiovisual information already present in video to improve self-
supervised representation learning? To answer this question, we study various pretraining …
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
Despite recent progress made by self-supervised methods in representation learning with
residual networks, they still underperform supervised learning on the ImageNet classification …
On the duality between contrastive and non-contrastive self-supervised learning
Recent approaches in self-supervised learning of image representations can be categorized
into different families of methods and, in particular, can be divided into contrastive and non …
MC-JEPA: A joint-embedding predictive architecture for self-supervised learning of motion and content features
Self-supervised learning of visual representations has focused on learning content
features, which do not capture object motion or location, and focus on identifying and …
Self-supervised learning via maximum entropy coding
A mainstream type of current self-supervised learning methods pursues a general-purpose
representation that can be well transferred to downstream tasks, typically by optimizing on a …
LVM-Med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching
Obtaining large pre-trained models that can be fine-tuned to new tasks with limited
annotated samples has remained an open challenge for medical imaging data. While pre …