The entropy enigma: Success and failure of entropy minimization

O Press, R Shwartz-Ziv, Y LeCun, M Bethge - arXiv preprint arXiv …, 2024 - arxiv.org
Entropy minimization (EM) is frequently used to increase the accuracy of classification
models when they are faced with new data at test time. EM is a self-supervised learning …
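The snippet above describes entropy minimization at test time: sharpening a model's predictive distribution on unlabeled test inputs by descending the gradient of its prediction entropy. A minimal sketch of that idea, using an analytic softmax-entropy gradient on raw logits (the toy data, step size, and iteration count are illustrative assumptions, not taken from the paper — in practice the gradient would flow into model parameters, not the logits directly):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy of each row's predictive distribution.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Hypothetical unlabeled test batch: 4 samples, 3 classes (assumed shapes).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
h_before = entropy(softmax(logits)).mean()

# Analytic gradient of mean prediction entropy w.r.t. the logits:
# dH/dz_i = p_i * (sum_k p_k (log p_k + 1) - (log p_i + 1)),
# followed by plain gradient-descent steps that sharpen the predictions.
for _ in range(100):
    p = softmax(logits)
    log_p = np.log(p + 1e-12)
    g = p * ((p * (log_p + 1)).sum(-1, keepdims=True) - (log_p + 1))
    logits -= 1.0 * g

h_after = entropy(softmax(logits)).mean()
```

After the descent steps, `h_after` is lower than `h_before`: each row's distribution has concentrated on its initially most likely class, which is exactly the self-reinforcing behavior that makes EM both succeed and fail depending on whether those initial guesses are right.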

Complementary benefits of contrastive learning and self-training under distribution shift

S Garg, A Setlur, Z Lipton… - Advances in …, 2023 - proceedings.neurips.cc
Self-training and contrastive learning have emerged as leading techniques for incorporating
unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it …
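The second snippet concerns self-training for incorporating unlabeled target data under distribution shift. A toy sketch of the self-training half, using a nearest-centroid classifier with confidence-thresholded pseudo-labels (the 1-D data, threshold, and round count are assumptions for illustration; the paper's actual setup is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical source data: two 1-D classes centered at -2 and +2.
src_x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
src_y = np.array([0] * 100 + [1] * 100)
# Hypothetical shifted (unlabeled) target data: centers moved to -1.5 and +2.5.
tgt_x = np.concatenate([rng.normal(-1.5, 1, 100), rng.normal(2.5, 1, 100)])

# "Source model": per-class means, i.e. a nearest-centroid classifier.
mu = np.array([src_x[src_y == c].mean() for c in (0, 1)])

for _ in range(5):  # self-training rounds
    d = np.abs(tgt_x[:, None] - mu[None, :])  # distance to each centroid
    pseudo = d.argmin(axis=1)                 # pseudo-labels for target points
    margin = np.abs(d[:, 0] - d[:, 1])
    keep = margin > 1.0                       # keep only confident pseudo-labels
    # Refit centroids on source data plus confident pseudo-labeled target data.
    mu = np.array([
        np.concatenate([src_x[src_y == c],
                        tgt_x[keep][pseudo[keep] == c]]).mean()
        for c in (0, 1)
    ])
```

The centroids drift from the source class means toward the shifted target clusters, which is the adaptation effect self-training provides; the cited work studies how contrastive pretraining complements this mechanism.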