A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Abstract Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …

Deep unsupervised domain adaptation: A review of recent advances and perspectives

X Liu, C Yoo, F Xing, H Oh, G El Fakhri… - … on Signal and …, 2022 - nowpublishers.com
Deep learning has become the method of choice to tackle real-world problems in different
domains, partly because of its ability to learn from data and achieve impressive performance …

Test-time prompt tuning for zero-shot generalization in vision-language models

M Shu, W Nie, DA Huang, Z Yu… - Advances in …, 2022 - proceedings.neurips.cc
Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot
generalization in many downstream tasks with properly designed text prompts. Instead of …

Continual test-time domain adaptation

Q Wang, O Fink, L Van Gool… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain
without using any source data. Existing works mainly consider the case where the target …

Domain adaptation for medical image analysis: a survey

H Guan, M Liu - IEEE Transactions on Biomedical Engineering, 2021 - ieeexplore.ieee.org
Machine learning techniques used in computer-aided medical image analysis usually suffer
from the domain shift problem caused by different distributions between source/reference …

MEMO: Test-time robustness via adaptation and augmentation

M Zhang, S Levine, C Finn - Advances in neural information …, 2022 - proceedings.neurips.cc
While deep neural networks can attain good accuracy on in-distribution test points, many
applications require robustness even in the face of unexpected perturbations in the input …
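
MEMO's core quantity is the entropy of the marginal prediction averaged over augmented copies of a single test input, which the method minimizes at test time. The snippet below is a minimal sketch of that objective only (not the paper's full adaptation loop); the logits, class count, and helper names are illustrative.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy in nats
    return -sum(p * math.log(p) for p in probs if p > 0)

def marginal_entropy(aug_logits):
    """Entropy of the prediction averaged over augmented copies of one
    test point -- the quantity MEMO minimizes. `aug_logits` is a list of
    per-augmentation logit vectors for the same input."""
    probs = [softmax(l) for l in aug_logits]
    k = len(probs)
    avg = [sum(p[i] for p in probs) / k for i in range(len(probs[0]))]
    return entropy(avg), avg
```

By concavity of entropy, the marginal entropy is at least the average per-augmentation entropy, so disagreeing augmentations are penalized more than confident, consistent ones.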

FedBN: Federated learning on non-IID features via local batch normalization

X Li, M Jiang, X Zhang, M Kamp, Q Dou - arXiv preprint arXiv:2102.07623, 2021 - arxiv.org
The emerging paradigm of federated learning (FL) strives to enable collaborative training of
deep models on the network edge without centrally aggregating raw data and hence …
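
FedBN's key mechanism is a FedAvg-style aggregation that averages all parameters except batch-normalization parameters, which each client keeps local. Below is a minimal sketch of that aggregation rule under toy assumptions (models as dicts of name -> list of floats, a name predicate to mark BN parameters); it is not the paper's implementation.

```python
def fedbn_aggregate(client_models, is_bn_param):
    """FedAvg that skips batch-norm parameters (FedBN sketch).
    `client_models`: list of dicts mapping parameter name -> list of floats.
    `is_bn_param`: predicate marking names to keep client-local."""
    n = len(client_models)
    names = client_models[0].keys()
    # average only the shared (non-BN) parameters across clients
    averaged = {name: [sum(m[name][i] for m in client_models) / n
                       for i in range(len(client_models[0][name]))]
                for name in names if not is_bn_param(name)}
    # each client receives the shared average but retains its own BN params
    return [{**m, **averaged} for m in client_models]
```

Keeping BN statistics and affine parameters local lets each client normalize with respect to its own feature distribution, which is the point of the method under non-IID features.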

Tent: Fully test-time adaptation by entropy minimization

D Wang, E Shelhamer, S Liu, B Olshausen… - arXiv preprint arXiv …, 2020 - arxiv.org
A model must adapt itself to generalize to new and different data during testing. In this
setting of fully test-time adaptation the model has only the test data and its own parameters …
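
Tent's idea is to take gradient steps at test time that reduce the entropy of the model's own predictions. The sketch below stands in a single scalar `scale` for the BatchNorm affine parameters Tent actually updates, and uses a finite-difference gradient instead of backpropagation; the batch, learning rate, and step count are illustrative, not the paper's settings.

```python
import math

def softmax(logits):
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def tent_step(logits_batch, scale, lr=0.1, eps=1e-4):
    """One entropy-minimization step on a scalar temperature-like
    parameter, via a central finite-difference gradient."""
    def batch_entropy(s):
        return sum(entropy(softmax([z * s for z in logits]))
                   for logits in logits_batch) / len(logits_batch)
    grad = (batch_entropy(scale + eps) - batch_entropy(scale - eps)) / (2 * eps)
    return scale - lr * grad

logits_batch = [[2.0, 1.0, 0.5], [0.3, 1.5, 0.2]]
scale = 1.0
for _ in range(20):
    scale = tent_step(logits_batch, scale)
# descending the entropy sharpens the softmax, so `scale` grows past 1.0
```

The direction of the update here is intuitive: sharper predictions have lower entropy, so the gradient pushes `scale` upward; in the real method the BN affine parameters reshape features rather than merely rescaling logits.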

Improving robustness against common corruptions by covariate shift adaptation

S Schneider, E Rusak, L Eck… - Advances in neural …, 2020 - proceedings.neurips.cc
Today's state-of-the-art machine vision models are vulnerable to image corruptions like
blurring or compression artefacts, limiting their performance in many real-world applications …
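
A central ingredient of this covariate-shift adaptation is replacing (or blending) the source BatchNorm statistics with statistics estimated on the current test batch. The sketch below treats one scalar feature and weights the source statistics as if they came from `n_prior` pseudo-samples; the function name and `n_prior` value are illustrative assumptions, not the paper's API.

```python
def adapt_bn_stats(source_mean, source_var, test_batch, n_prior=16):
    """Blend source BN statistics with test-batch statistics.
    `n_prior` plays the role of a pseudo-sample count for the source
    stats: large n_prior trusts the source, n_prior=0 uses only the
    test batch."""
    n = len(test_batch)
    batch_mean = sum(test_batch) / n
    batch_var = sum((x - batch_mean) ** 2 for x in test_batch) / n
    w = n_prior / (n_prior + n)  # weight on the source statistics
    mean = w * source_mean + (1 - w) * batch_mean
    var = w * source_var + (1 - w) * batch_var
    return mean, var
```

Under a corruption that shifts the input distribution, the blended statistics move the normalization toward the test batch, which is what recovers accuracy in this line of work.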

WILDS: A benchmark of in-the-wild distribution shifts

PW Koh, S Sagawa, H Marklund… - International …, 2021 - proceedings.mlr.press
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …