On the stepwise nature of self-supervised learning

JB Simon, M Knutins, L Ziyin, D Geisz… - International …, 2023 - proceedings.mlr.press
We present a simple picture of the training process of self-supervised learning methods with
dual deep networks. In our picture, these methods learn their high-dimensional embeddings …

Complementary benefits of contrastive learning and self-training under distribution shift

S Garg, A Setlur, Z Lipton… - Advances in …, 2023 - proceedings.neurips.cc
Self-training and contrastive learning have emerged as leading techniques for incorporating
unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it …

Why the simplest explanation isn't always the best

EL Dyer, K Kording - … of the National Academy of Sciences, 2023 - National Acad Sciences
As datasets in neuroscience increase in size and complexity, interpreting these high-dimensional data is becoming more critical. However, developing an intuition for patterns or …

Active self-supervised learning: A few low-cost relationships are all you need

V Cabannes, L Bottou, Y LeCun… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-Supervised Learning (SSL) has emerged as the solution of choice for learning transferable representations from unlabeled data. However, SSL requires building samples …

Self-supervised learning with lie symmetries for partial differential equations

G Mialon, Q Garrido, H Lawrence… - Advances in …, 2023 - proceedings.neurips.cc
Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and …

Information flow in self-supervised learning

Z Tan, J Yang, W Huang, Y Yuan, Y Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we provide a comprehensive toolbox for understanding and enhancing self-
supervised learning (SSL) methods through the lens of matrix information theory …

Towards open-world recognition: Critical problems and challenges

K Wang, Z Li, Y Chen, W Dong, J Chen - Engineering Applications of …, 2025 - Elsevier
With the emergence of rich classification models and high computing power, recognition
systems are widely used in various fields. Unfortunately, as the scale of open systems …

Disentangling Masked Autoencoders for Unsupervised Domain Generalization

A Zhang, H Wang, X Wang, TS Chua - European Conference on Computer …, 2024 - Springer
Domain Generalization (DG), designed to enhance out-of-distribution (OOD) generalization, is all about learning invariance against domain shifts utilizing sufficient …

Matrix information theory for self-supervised learning

Y Zhang, Z Tan, J Yang, W Huang, Y Yuan - arXiv preprint arXiv …, 2023 - arxiv.org
The maximum entropy encoding framework provides a unified perspective for many non-
contrastive learning methods like SimSiam, Barlow Twins, and MEC. Inspired by this …

Memorization in self-supervised learning improves downstream generalization

W Wang, MA Kaleem, A Dziedzic, M Backes… - arXiv preprint arXiv …, 2024 - arxiv.org
Self-supervised learning (SSL) has recently received significant attention due to its ability to train high-performance encoders purely on unlabeled data, often scraped from the internet …