Deep unsupervised domain adaptation: A review of recent advances and perspectives

X Liu, C Yoo, F Xing, H Oh, G El Fakhri… - … on Signal and …, 2022 - nowpublishers.com
Deep learning has become the method of choice to tackle real-world problems in different
domains, partly because of its ability to learn from data and achieve impressive performance …
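
As a concrete anchor for the setting this review covers, here is a minimal sketch of one classic deep UDA recipe: train a classifier on labeled source data while aligning source and target features with an MMD penalty. The toy backbone, dimensions, and loss weight are illustrative assumptions, not the survey's prescription.

```python
import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches under an RBF kernel."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU())   # toy feature extractor
classifier = nn.Linear(128, 10)
opt = torch.optim.Adam([*encoder.parameters(), *classifier.parameters()], lr=1e-3)

xs, ys = torch.randn(32, 256), torch.randint(0, 10, (32,))  # labeled source batch
xt = torch.randn(32, 256)                                   # unlabeled target batch

fs, ft = encoder(xs), encoder(xt)
# Supervised loss on source + distribution alignment between domains.
loss = nn.functional.cross_entropy(classifier(fs), ys) + 0.5 * gaussian_mmd(fs, ft)
opt.zero_grad(); loss.backward(); opt.step()
```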

Domain generalization: A survey

K Zhou, Z Liu, Y Qiao, T Xiang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …

Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need

DW Zhou, ZW Cai, HJ Ye, DC Zhan, Z Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting
old ones. Traditional CIL models are trained from scratch to continually acquire knowledge …
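
The "generalizability and adaptivity" angle rests on reusing a frozen pre-trained backbone instead of training from scratch. A minimal sketch of the prototype-based CIL baseline family this paper builds on, with a placeholder backbone and toy dimensions, could look like this:

```python
import torch

backbone = torch.nn.Identity()   # stand-in for a frozen pre-trained encoder
prototypes = {}                  # class id -> mean feature vector

def learn_task(xs, ys):
    """Add new classes by storing their mean features; nothing is retrained,
    so prototypes of earlier classes are never forgotten."""
    with torch.no_grad():
        feats = backbone(xs)
    for c in ys.unique().tolist():
        prototypes[c] = feats[ys == c].mean(dim=0)

def predict(x):
    """Nearest-prototype classification over all classes seen so far."""
    with torch.no_grad():
        f = backbone(x)
    classes = list(prototypes)
    dists = torch.stack([(f - prototypes[c]).norm(dim=-1) for c in classes], dim=-1)
    return torch.tensor(classes)[dists.argmin(dim=-1)]

# Two incremental "tasks" with disjoint class sets:
learn_task(torch.randn(20, 64), torch.randint(0, 5, (20,)))
learn_task(torch.randn(20, 64), torch.randint(5, 10, (20,)))
print(predict(torch.randn(4, 64)))
```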

Contrastive test-time adaptation

D Chen, D Wang, T Darrell… - Proceedings of the …, 2022 - openaccess.thecvf.com
Test-time adaptation is a special setting of unsupervised domain adaptation where a trained
model on the source domain has to adapt to the target domain without accessing source …
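
To make the setting concrete: a common source-free baseline adapts only the BatchNorm affine parameters by minimizing prediction entropy on each test batch (in the spirit of Tent). The sketch below illustrates that baseline, not the paper's contrastive objective; the model and shapes are toy assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64),
                      nn.ReLU(), nn.Linear(64, 10))
# Only the BatchNorm affine parameters receive optimizer updates.
bn_params = [p for m in model.modules() if isinstance(m, nn.BatchNorm1d)
             for p in m.parameters()]
opt = torch.optim.SGD(bn_params, lr=1e-3)

def adapt_and_predict(x_test):
    """One TTA step: minimize entropy of the model's own predictions."""
    probs = model(x_test).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    opt.zero_grad(); entropy.backward(); opt.step()
    return probs.argmax(dim=-1)  # predictions from the same forward pass

print(adapt_and_predict(torch.randn(16, 32)))
```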

Deep long-tailed learning: A survey

Y Zhang, B Kang, B Hooi, S Yan… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …
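
One of the simplest remedies this survey covers is re-weighting the loss by inverse class frequency so tail-class mistakes cost more. A minimal sketch with toy class counts follows; real systems often prefer smoother schemes such as class-balanced (effective-number) weights.

```python
import torch
import torch.nn as nn

counts = torch.tensor([900., 90., 10.])          # long-tailed class counts
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3, requires_grad=True)
labels = torch.randint(0, 3, (8,))
loss = criterion(logits, labels)  # errors on rare classes are up-weighted
loss.backward()
```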

Fine-tuning can distort pretrained features and underperform out-of-distribution

A Kumar, A Raghunathan, R Jones, T Ma… - arXiv preprint arXiv …, 2022 - arxiv.org
When transferring a pretrained model to a downstream task, two popular methods are full
fine-tuning (updating all the model parameters) and linear probing (updating only the last …
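
The two strategies differ only in which parameters receive gradients. A minimal sketch with a stand-in backbone (the two optimizers below are alternatives, shown side by side):

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 128), nn.ReLU())  # stand-in pretrained model
head = nn.Linear(128, 10)                                 # task-specific head

# Full fine-tuning: gradients flow into every parameter.
ft_opt = torch.optim.AdamW([*backbone.parameters(), *head.parameters()], lr=1e-4)

# Linear probing: freeze the backbone, train only the last (linear) layer.
for p in backbone.parameters():
    p.requires_grad = False
lp_opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
```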

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
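
In standard notation (the snippet is truncated, so the exact wording is the survey's), the shared-distribution assumption and the OOD setting obtained when it fails can be written as:

```latex
% A standard way to write the assumption the abstract alludes to, and the
% out-of-distribution (OOD) setting obtained when it fails:
\begin{align}
  \text{in-distribution:}     \quad & P_{\mathrm{train}}(X, Y) = P_{\mathrm{test}}(X, Y) \\
  \text{out-of-distribution:} \quad & P_{\mathrm{train}}(X, Y) \neq P_{\mathrm{test}}(X, Y)
\end{align}
```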

Robust test-time adaptation in dynamic scenarios

L Yuan, B Xie, S Li - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Test-time adaptation (TTA) intends to adapt the pretrained model to test distributions with
only unlabeled test data streams. Most of the previous TTA methods have achieved great …
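
One common robustness ingredient for dynamic streams is to update only on reliable (low-entropy) test samples, so corrupted or ambiguous batches do not destabilize the model. The sketch below shows that filter in isolation; the threshold, model, and stream are illustrative assumptions, not necessarily this paper's exact mechanism.

```python
import math
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64),
                      nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
threshold = 0.4 * math.log(10)  # fraction of maximum entropy for 10 classes

for x in [torch.randn(16, 32) for _ in range(5)]:  # stand-in test stream
    probs = model(x).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    reliable = entropy < threshold            # skip high-entropy samples
    if reliable.any():
        loss = entropy[reliable].mean()
        opt.zero_grad(); loss.backward(); opt.step()
```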

S-prompts learning with pre-trained transformers: An Occam's razor for domain incremental learning

Y Wang, Z Huang, X Hong - Advances in Neural …, 2022 - proceedings.neurips.cc
State-of-the-art deep neural networks are still struggling to address the catastrophic
forgetting problem in continual learning. In this paper, we propose one simple paradigm …
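
A minimal sketch of the paradigm's core move: keep the pre-trained transformer frozen and learn an independent prompt (and head) per domain, so a new domain never overwrites an old one. Dimensions and the forward pass are simplified assumptions; the paper additionally identifies a test sample's domain before selecting its prompt.

```python
import torch
import torch.nn as nn

d_model, n_classes = 64, 10
backbone = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False  # pre-trained weights stay fixed

prompts, heads = nn.ParameterDict(), nn.ModuleDict()

def add_domain(name):
    """Each new domain gets its own fresh prompt tokens and classifier."""
    prompts[name] = nn.Parameter(torch.randn(1, 4, d_model) * 0.02)
    heads[name] = nn.Linear(d_model, n_classes)

def forward(x, domain):
    """Prepend the domain's prompt tokens, encode, and classify."""
    tok = torch.cat([prompts[domain].expand(x.size(0), -1, -1), x], dim=1)
    return heads[domain](backbone(tok).mean(dim=1))

add_domain("sketch"); add_domain("photo")
logits = forward(torch.randn(8, 16, d_model), "photo")  # shape (8, 10)
```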