A comprehensive survey on test-time adaptation under distribution shifts
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …
Deep unsupervised domain adaptation: A review of recent advances and perspectives
Deep learning has become the method of choice to tackle real-world problems in different
domains, partly because of its ability to learn from data and achieve impressive performance …
Test-time prompt tuning for zero-shot generalization in vision-language models
Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot
generalization in many downstream tasks with properly designed text prompts. Instead of …
Continual test-time domain adaptation
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain
without using any source data. Existing works mainly consider the case where the target …
Domain adaptation for medical image analysis: a survey
Machine learning techniques used in computer-aided medical image analysis usually suffer
from the domain shift problem caused by different distributions between source/reference …
MEMO: Test time robustness via adaptation and augmentation
While deep neural networks can attain good accuracy on in-distribution test points, many
applications require robustness even in the face of unexpected perturbations in the input …
FedBN: Federated learning on non-IID features via local batch normalization
The emerging paradigm of federated learning (FL) strives to enable collaborative training of
deep models on the network edge without centrally aggregating raw data and hence …
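The mechanism named in this title — federated averaging that shares every parameter except the batch-normalization entries, which stay local to each client — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the state-dict layout and the "bn" key convention are assumptions.

```python
# Sketch of FedBN-style aggregation: average all client parameters
# EXCEPT batch-norm entries, which remain client-local.
# Parameters are represented as flat dicts of scalars for simplicity;
# key names like "conv.w" and "bn.gamma" are hypothetical.

def aggregate(client_states, is_bn_key=lambda k: "bn" in k):
    """Average shared parameters across clients, skipping BN entries."""
    global_state = {}
    for k in client_states[0]:
        if is_bn_key(k):
            continue  # BN statistics/affines are never aggregated
        global_state[k] = sum(s[k] for s in client_states) / len(client_states)
    return global_state

def apply_update(local_state, global_state):
    """Each client takes the shared averages but keeps its own BN params."""
    return {k: global_state.get(k, v) for k, v in local_state.items()}
```

For example, two clients holding `{"conv.w": 1.0, "bn.gamma": 5.0}` and `{"conv.w": 3.0, "bn.gamma": 9.0}` would share the averaged `conv.w = 2.0` while each keeps its own `bn.gamma`.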
Tent: Fully test-time adaptation by entropy minimization
A model must adapt itself to generalize to new and different data during testing. In this
setting of fully test-time adaptation, the model has only the test data and its own parameters …
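The mechanism this title names — minimizing the entropy of the model's own predictions on unlabeled test batches, updating only a small set of affine parameters — can be sketched as follows. This NumPy toy is an illustration under stated assumptions, not the authors' implementation: a per-feature affine transform in front of a frozen linear classifier stands in for the batch-norm affine parameters that Tent actually adapts, and all names are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def tent_step(X, W, scale, shift, lr=0.05):
    """One entropy-minimization step on a test batch.

    Only the affine (scale, shift) parameters are updated; the
    classifier weights W stay frozen, mirroring how test-time
    entropy minimization touches just a small parameter subset.
    """
    Z = (X * scale + shift) @ W.T                    # logits, (B, K)
    P = softmax(Z)
    H = -(P * np.log(P + 1e-12)).sum(axis=1)         # per-sample entropy
    # Analytic softmax-entropy gradient: dH/dz_k = -p_k (log p_k + H)
    dZ = -P * (np.log(P + 1e-12) + H[:, None])
    dscale = (dZ @ W * X).mean(axis=0)               # chain rule through affine
    dshift = (dZ @ W).mean(axis=0)
    return scale - lr * dscale, shift - lr * dshift, H.mean()
```

Iterating `tent_step` over incoming test batches drives the average prediction entropy down, sharpening the model's decisions without any labels.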
Improving robustness against common corruptions by covariate shift adaptation
Today's state-of-the-art machine vision models are vulnerable to image corruptions like
blurring or compression artefacts, limiting their performance in many real-world applications …
WILDS: A benchmark of in-the-wild distribution shifts
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …