Deep unsupervised domain adaptation: A review of recent advances and perspectives
Deep learning has become the method of choice to tackle real-world problems in different
domains, partly because of its ability to learn from data and achieve impressive performance …
Domain generalization: A survey
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …
Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting
old ones. Traditional CIL models are trained from scratch to continually acquire knowledge …
Contrastive test-time adaptation
Test-time adaptation is a special setting of unsupervised domain adaptation where a model
trained on the source domain has to adapt to the target domain without accessing source …
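That constraint (unlabeled target data only, no source access) is easier to see in code. The sketch below is not the contrastive approach this paper develops; it is a minimal entropy-minimization adaptation step in the spirit of earlier TTA methods, with the model, data stream, and loader names as placeholders.

```python
import torch

def adapt_on_batch(model, optimizer, x):
    """One adaptation step on an unlabeled test batch: update the model
    by minimizing the entropy of its own predictions (no source data,
    no labels -- the TTA constraint described above)."""
    logits = model(x)
    entropy = -(logits.softmax(dim=1) * logits.log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Hypothetical usage: adapt only normalization-layer parameters while
# streaming unlabeled target batches.
# model = load_source_pretrained()                      # placeholder
# params = [p for m in model.modules()
#           if isinstance(m, torch.nn.BatchNorm2d)
#           for p in m.parameters()]
# optimizer = torch.optim.SGD(params, lr=1e-3)
# for x in target_stream:                               # unlabeled stream
#     preds = adapt_on_batch(model, optimizer, x)
```

Restricting updates to the normalization layers' affine parameters is a common choice in this line of work, since it keeps the adapted model close to the source solution.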
Deep long-tailed learning: A survey
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …
Fine-tuning can distort pretrained features and underperform out-of-distribution
When transferring a pretrained model to a downstream task, two popular methods are full
fine-tuning (updating all the model parameters) and linear probing (updating only the last …
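The two strategies differ only in which parameters receive gradients. A minimal PyTorch sketch of the contrast, using a torchvision ResNet-50 purely as a stand-in backbone and a placeholder class count:

```python
import torch
from torchvision.models import resnet50

num_classes = 10  # downstream task size (placeholder)

# Full fine-tuning: every parameter is updated on the downstream task.
ft_model = resnet50(weights="IMAGENET1K_V2")
ft_model.fc = torch.nn.Linear(ft_model.fc.in_features, num_classes)
ft_params = ft_model.parameters()            # all parameters train

# Linear probing: freeze the pretrained backbone, train only a new head.
lp_model = resnet50(weights="IMAGENET1K_V2")
for p in lp_model.parameters():
    p.requires_grad = False                  # freeze pretrained features
lp_model.fc = torch.nn.Linear(lp_model.fc.in_features, num_classes)
lp_params = lp_model.fc.parameters()         # only the new head trains

opt_ft = torch.optim.SGD(ft_params, lr=1e-3)
opt_lp = torch.optim.SGD(lp_params, lr=1e-2)
```

Because the backbone is frozen before the new head is attached, only the freshly initialized fc layer retains gradients, which is exactly the last-layer update that linear probing refers to.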
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Towards out-of-distribution generalization: A survey
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
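The truncated clause refers to the standard identical-distribution assumption; restated in symbols (notation mine, not the survey's), with joint distributions over inputs X and labels Y:

```latex
% i.i.d. assumption: training and test data share one joint distribution
\[
  P_{\mathrm{tr}}(X, Y) = P_{\mathrm{te}}(X, Y),
\]
% whereas out-of-distribution (OOD) generalization studies the shifted case
\[
  P_{\mathrm{tr}}(X, Y) \neq P_{\mathrm{te}}(X, Y).
\]
```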
Robust test-time adaptation in dynamic scenarios
Test-time adaptation (TTA) intends to adapt the pretrained model to test distributions with
only unlabeled test data streams. Most of the previous TTA methods have achieved great …
S-prompts learning with pre-trained transformers: An Occam's razor for domain incremental learning
State-of-the-art deep neural networks are still struggling to address the catastrophic
forgetting problem in continual learning. In this paper, we propose one simple paradigm …