A comprehensive survey on test-time adaptation under distribution shifts
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …
Test-time prompt tuning for zero-shot generalization in vision-language models
Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot
generalization in many downstream tasks with properly designed text prompts. Instead of …
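The entry above describes tuning the prompt at test time rather than hand-crafting it. Below is a minimal, hedged PyTorch sketch of one common instantiation: only a learnable prompt context is updated, by minimizing the entropy of the prediction averaged over the most confident augmented views of a single test image. The linear image encoder, the augmentation routine, and every hyperparameter are illustrative stand-ins, not the real CLIP/TPT implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, feat_dim = 10, 512

# Frozen stand-ins for a joint image-text embedding model (assumption, not CLIP's API).
image_encoder = nn.Linear(3 * 32 * 32, feat_dim).requires_grad_(False)
class_name_emb = torch.randn(num_classes, feat_dim)        # frozen per-class text features
prompt_ctx = torch.zeros(feat_dim, requires_grad=True)     # the only learnable parameter

def predict(images):
    """Class logits from cosine similarity between image and prompted text features."""
    img_feat = F.normalize(image_encoder(images.flatten(1)), dim=-1)
    txt_feat = F.normalize(class_name_emb + prompt_ctx, dim=-1)
    return 100.0 * img_feat @ txt_feat.t()

def augment(image, n_views=8):
    """Crude augmentation stand-in: random horizontal flips plus noise."""
    views = image.repeat(n_views, 1, 1, 1)
    flips = torch.rand(n_views) < 0.5
    views[flips] = views[flips].flip(-1)
    return views + 0.05 * torch.randn_like(views)

def tpt_step(image, optimizer, keep_ratio=0.5):
    probs = predict(augment(image)).softmax(-1)            # per-view class probabilities
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(-1)   # per-view entropy
    keep = ent.argsort()[: max(1, int(keep_ratio * len(ent)))]    # keep confident views only
    marginal = probs[keep].mean(0)
    loss = -(marginal * marginal.clamp_min(1e-8).log()).sum()     # entropy of averaged prediction
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    with torch.no_grad():
        return predict(image.unsqueeze(0)).argmax(-1).item()

image = torch.randn(3, 32, 32)                             # one unlabeled test image
opt = torch.optim.AdamW([prompt_ctx], lr=5e-3)
print("prediction after one prompt-tuning step:", tpt_step(image, opt))
```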
Test-time training with masked autoencoders
Test-time training adapts to a new test distribution on the fly by optimizing a model for each
test input using self-supervision. In this paper, we use masked autoencoders for this one …
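A minimal sketch of the per-sample setting described above, assuming a small patch-based encoder in place of a real masked autoencoder: for each test image, a copy of the shared encoder is briefly optimized on masked-patch reconstruction before the frozen classification head predicts. Architecture sizes, the masking ratio, and the number of steps are illustrative.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
patch, n_patches, dim, n_classes = 8, 16, 64, 10            # 32x32 image -> 16 patches of 8x8

encoder = nn.Sequential(nn.Linear(patch * patch * 3, dim), nn.ReLU(), nn.Linear(dim, dim))
decoder = nn.Linear(dim, patch * patch * 3)                 # self-supervised reconstruction head
classifier = nn.Linear(dim, n_classes)                      # main-task head, frozen at test time

def to_patches(img):
    # (3, 32, 32) -> (n_patches, patch*patch*3)
    p = img.unfold(1, patch, patch).unfold(2, patch, patch)
    return p.reshape(3, n_patches, -1).transpose(0, 1).reshape(n_patches, -1)

def ttt_mae_predict(img, steps=5, mask_ratio=0.75, lr=1e-3):
    # Adapt a fresh copy, so every test sample starts from the same trained weights.
    enc, dec = copy.deepcopy(encoder), copy.deepcopy(decoder)
    opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    patches = to_patches(img)
    for _ in range(steps):
        idx = torch.randperm(n_patches)[: int(mask_ratio * n_patches)]
        corrupted = patches.clone()
        corrupted[idx] = 0.0                                 # zero out the masked patches
        recon = dec(enc(corrupted))
        loss = ((recon[idx] - patches[idx]) ** 2).mean()     # reconstruct only what was masked
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return classifier(enc(patches).mean(0)).argmax().item()

print("prediction for one test image:", ttt_mae_predict(torch.randn(3, 32, 32)))
```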
Efficient test-time model adaptation without forgetting
Test-time adaptation provides an effective means of tackling the potential distribution shift
between model training and inference, by dynamically updating the model at test time. This …
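A minimal sketch in the spirit of "update the model at test time without forgetting", under simplifying assumptions: only the batch-norm affine parameters are adapted by entropy minimization on confident test samples, and an L2 anchor toward the source weights limits drift. The confidence threshold and the plain (unweighted) anchor are illustrative simplifications, not the paper's exact recipe.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 10))
model.train()                                        # BN keeps adapting its statistics at test time

# Freeze everything except the batch-norm affine parameters (a common lightweight choice).
for p in model.parameters():
    p.requires_grad_(False)
adapt_params = []
for m in model.modules():
    if isinstance(m, nn.BatchNorm1d):
        for p in m.parameters():
            p.requires_grad_(True)
            adapt_params.append(p)
source_copy = [p.detach().clone() for p in adapt_params]     # remember the source weights
opt = torch.optim.SGD(adapt_params, lr=1e-3, momentum=0.9)

def adapt_batch(x, ent_threshold=0.4 * math.log(10), anchor_weight=1.0):
    logits = model(x)
    probs = logits.softmax(-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
    reliable = entropy < ent_threshold               # skip high-entropy (unreliable) samples
    if reliable.any():
        anchor = sum(((p - s) ** 2).sum() for p, s in zip(adapt_params, source_copy))
        loss = entropy[reliable].mean() + anchor_weight * anchor   # entropy + anti-forgetting anchor
        opt.zero_grad(); loss.backward(); opt.step()
    return logits.argmax(-1)

for _ in range(3):                                   # unlabeled online test stream
    preds = adapt_batch(torch.randn(16, 32))
print("predictions:", preds.tolist())
```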
Robust test-time adaptation in dynamic scenarios
Test-time adaptation (TTA) intends to adapt the pretrained model to test distributions with
only unlabeled test data streams. Most of the previous TTA methods have achieved great …
MEMO: Test time robustness via adaptation and augmentation
While deep neural networks can attain good accuracy on in-distribution test points, many
applications require robustness even in the face of unexpected perturbations in the input …
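A minimal sketch of the adapt-and-augment idea named in the title, under simplifying assumptions: a single test image is augmented, the predicted distributions over the augmentations are averaged, and one gradient step on a copy of the model minimizes the entropy of that marginal before predicting. The tiny CNN and the flip/noise augmentations are illustrative stand-ins.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
source_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

def augment(image, n_views=16):
    views = image.repeat(n_views, 1, 1, 1)
    flips = torch.rand(n_views) < 0.5
    views[flips] = views[flips].flip(-1)             # horizontal flips
    return views + 0.1 * torch.randn_like(views)     # noise as a crude augmentation stand-in

def memo_predict(image, lr=1e-3):
    model = copy.deepcopy(source_model)              # episodic: adapt a fresh copy per test point
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    probs = model(augment(image)).softmax(-1)
    marginal = probs.mean(0)                         # marginal distribution over augmentations
    loss = -(marginal * marginal.clamp_min(1e-8).log()).sum()
    opt.zero_grad(); loss.backward(); opt.step()     # one adaptation step
    with torch.no_grad():
        return model(image.unsqueeze(0)).argmax(-1).item()

print("prediction for one test image:", memo_predict(torch.randn(3, 32, 32)))
```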
Align your prompts: Test-time prompting with distribution alignment for zero-shot generalization
The promising zero-shot generalization of vision-language models such as CLIP has led to
their adoption using prompt learning for numerous downstream tasks. Previous works have …
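A minimal sketch of adding a distribution-alignment term to test-time prompting, under simplifying assumptions: besides the entropy of the prediction averaged over augmented views, the mean and variance of the prompted test features are pulled toward precomputed source statistics, and only the prompt vectors are updated. The linear encoder, the learnable text/visual prompt vectors, the stored statistics, and the loss weight are all illustrative assumptions, not the actual CLIP pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, feat_dim = 10, 512
image_encoder = nn.Linear(3 * 32 * 32, feat_dim).requires_grad_(False)   # frozen
class_name_emb = torch.randn(num_classes, feat_dim)                      # frozen text features
text_prompt = torch.zeros(feat_dim, requires_grad=True)                  # learnable prompts
visual_prompt = torch.zeros(feat_dim, requires_grad=True)
source_mean = torch.zeros(feat_dim)                                      # offline source statistics,
source_var = torch.ones(feat_dim)                                        # assumed to be given
opt = torch.optim.AdamW([text_prompt, visual_prompt], lr=5e-3)

def align_step(views, align_weight=100.0):
    feats = image_encoder(views.flatten(1)) + visual_prompt              # prompted visual features
    logits = 100.0 * F.normalize(feats, dim=-1) @ F.normalize(class_name_emb + text_prompt, dim=-1).t()
    marginal = logits.softmax(-1).mean(0)
    entropy = -(marginal * marginal.clamp_min(1e-8).log()).sum()
    align = ((feats.mean(0) - source_mean).abs().mean()
             + (feats.var(0) - source_var).abs().mean())                 # match first and second moments
    loss = entropy + align_weight * align
    opt.zero_grad(); loss.backward(); opt.step()
    return marginal.argmax().item()

views = torch.randn(8, 3, 32, 32)                    # augmented views of a single test image
print("prediction (marginal over views):", align_step(views))
```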
Robust mean teacher for continual and gradual test-time adaptation
Since experiencing domain shifts during test-time is inevitable in practice, test-time adaptation
(TTA) continues to adapt the model after deployment. Recently, the area of continual and …
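A minimal sketch of a mean-teacher loop for a continually shifting test stream, under simplifying assumptions: on each unlabeled batch the student is trained to match an exponential-moving-average (EMA) teacher through a symmetric cross-entropy-style consistency loss, and the teacher then tracks the student by EMA; predictions come from the more stable teacher. The backbone, loss form, and momentum are illustrative.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                          # the teacher is never updated by gradients
opt = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)

def symmetric_ce(p_student, p_teacher, eps=1e-8):
    # Cross-entropy in both directions; more robust to noisy teacher targets.
    return (-(p_teacher * (p_student + eps).log()).sum(-1)
            - (p_student * (p_teacher + eps).log()).sum(-1)).mean()

def adapt_batch(x, ema_momentum=0.999):
    p_s = student(x).softmax(-1)
    with torch.no_grad():
        p_t = teacher(x).softmax(-1)
    loss = symmetric_ce(p_s, p_t)
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                            # teacher <- EMA of student
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_momentum).add_(s_p, alpha=1 - ema_momentum)
    return p_t.argmax(-1)                            # predict with the more stable teacher

for _ in range(3):                                   # continually shifting, unlabeled test stream
    preds = adapt_batch(torch.randn(16, 32))
print("teacher predictions:", preds.tolist())
```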
NOTE: Robust continual test-time adaptation against temporal correlation
Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts
between training and testing phases without additional data acquisition or labeling cost; only …
EcoTTA: Memory-efficient continual test-time adaptation via self-distilled regularization
This paper presents a simple yet effective approach that improves continual test-time
adaptation (TTA) in a memory-efficient manner. TTA may primarily be conducted on edge …
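A minimal sketch of the memory-saving idea suggested by the title, under simplifying assumptions: the pretrained backbone stays frozen, only a small adapter ("meta") module is updated with an entropy objective, and the adapted features are regularized toward the frozen features so the model cannot drift far from its source behaviour. The residual adapter, the layer sizes, and the regularization weight are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
frozen_block = nn.Sequential(nn.Linear(32, 64), nn.ReLU()).requires_grad_(False)
frozen_head = nn.Linear(64, 10).requires_grad_(False)
adapter = nn.Sequential(nn.Linear(64, 64), nn.BatchNorm1d(64))   # small trainable adapter module
opt = torch.optim.SGD(adapter.parameters(), lr=1e-3)

def adapt_batch(x, reg_weight=0.5):
    frozen_feat = frozen_block(x)                    # what the frozen source model would compute
    adapted_feat = frozen_feat + adapter(frozen_feat)             # lightweight residual adaptation
    logits = frozen_head(adapted_feat)
    probs = logits.softmax(-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    distill = ((adapted_feat - frozen_feat) ** 2).mean()          # self-distilled regularization
    loss = entropy + reg_weight * distill
    opt.zero_grad(); loss.backward(); opt.step()
    return logits.argmax(-1)

for _ in range(3):                                   # continual, unlabeled test stream
    preds = adapt_batch(torch.randn(16, 32))
print("predictions:", preds.tolist())
```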