Domain generalization: A survey

K Zhou, Z Liu, Y Qiao, T Xiang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …

Kernel mean embedding of distributions: A review and beyond

K Muandet, K Fukumizu… - … and Trends® in …, 2017 - nowpublishers.com
A Hilbert space embedding of a distribution—in short, a kernel mean embedding—has
recently emerged as a powerful tool for machine learning and statistical inference. The basic …
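
The kernel mean embedding represents a distribution P as the RKHS element mu_P = E_{X~P}[k(., X)]; distances between embeddings give the maximum mean discrepancy (MMD). Below is a minimal NumPy sketch, not taken from the review, that estimates the squared MMD between two samples with an RBF kernel; the bandwidth and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X ~ P and Y ~ Q.

    MMD^2(P, Q) = ||mu_P - mu_Q||^2 in the RKHS, where mu_P is approximated
    by the empirical kernel mean embedding (1/n) * sum_i k(., x_i).
    """
    k_xx = rbf_kernel(X, X, gamma).mean()
    k_yy = rbf_kernel(Y, Y, gamma).mean()
    k_xy = rbf_kernel(X, Y, gamma).mean()
    return k_xx + k_yy - 2.0 * k_xy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(500, 2))   # sample from P
    Y = rng.normal(0.5, 1.0, size=(500, 2))   # sample from a shifted Q
    print(f"MMD^2(P, Q) ~= {mmd2(X, Y):.4f}")
```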

Generalizing to unseen domains: A survey on domain generalization

J Wang, C Lan, C Liu, Y Ouyang, T Qin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …

In search of lost domain generalization

I Gulrajani, D Lopez-Paz - arXiv preprint arXiv:2007.01434, 2020 - arxiv.org
The goal of domain generalization algorithms is to predict well on distributions different from
those seen during training. While a myriad of domain generalization algorithms exist …

Just train twice: Improving group robustness without training group information

EZ Liu, B Haghgoo, AS Chen… - International …, 2021 - proceedings.mlr.press
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …
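
The two-stage recipe behind Just Train Twice is: run standard ERM briefly, treat the training examples it misclassifies as a proxy for minority-group points, then retrain with those examples upweighted. The PyTorch sketch below illustrates that loop under some assumptions: the data loader yields (input, label, dataset index) triples, and the upweight factor and epoch counts are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

def train(model, loader, weights, epochs, lr=1e-3, device="cpu"):
    """One ERM pass; `weights` holds a per-example loss weight indexed by dataset position."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    model.train()
    for _ in range(epochs):
        for x, y, idx in loader:               # assumed loader format: (input, label, index)
            x, y = x.to(device), y.to(device)
            per_example = loss_fn(model(x), y)
            loss = (per_example * weights[idx].to(device)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

def jtt(make_model, loader, n_train, upweight=20.0, id_epochs=1, final_epochs=10):
    """Just Train Twice: identify ERM errors, then retrain with those examples upweighted."""
    # Stage 1: short ERM run to build the error set.
    id_model = make_model()
    train(id_model, loader, torch.ones(n_train), epochs=id_epochs)

    weights = torch.ones(n_train)
    id_model.eval()
    with torch.no_grad():
        for x, y, idx in loader:
            wrong = id_model(x).argmax(dim=1) != y
            weights[idx[wrong]] = upweight     # upweight misclassified (likely minority) points

    # Stage 2: retrain from scratch with the upweighted objective.
    final_model = make_model()
    train(final_model, loader, weights, epochs=final_epochs)
    return final_model
```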

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Domain generalization with mixstyle

K Zhou, Y Yang, Y Qiao, T Xiang - arXiv preprint arXiv:2104.02008, 2021 - arxiv.org
Though convolutional neural networks (CNNs) have demonstrated remarkable ability in
learning discriminative features, they often generalize poorly to unseen domains. Domain …
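
MixStyle addresses this by mixing instance-level feature statistics (per-channel mean and standard deviation) between random pairs of samples inside the network, which perturbs domain-specific style while preserving content. The following PyTorch layer is a minimal sketch of that idea; the Beta(0.1, 0.1) prior and the apply probability are typical choices but are stated here as assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MixStyle(nn.Module):
    """Mix instance-level feature statistics across a batch (training only)."""

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p = p                               # probability of applying MixStyle
        self.beta = torch.distributions.Beta(alpha, alpha)
        self.eps = eps

    def forward(self, x):                        # x: (B, C, H, W) feature map
        if not self.training or torch.rand(1).item() > self.p:
            return x

        b = x.size(0)
        mu = x.mean(dim=(2, 3), keepdim=True)                        # per-instance, per-channel mean
        sig = (x.var(dim=(2, 3), keepdim=True) + self.eps).sqrt()    # ... and standard deviation
        mu, sig = mu.detach(), sig.detach()                          # statistics treated as constants
        x_norm = (x - mu) / sig                                      # remove instance style

        perm = torch.randperm(b, device=x.device)                    # partner sample for each instance
        lam = self.beta.sample((b, 1, 1, 1)).to(x.device)            # mixing weight per instance

        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return x_norm * sig_mix + mu_mix                             # re-style with mixed statistics
```

Such a layer is typically inserted after early convolutional or residual blocks, where feature statistics are most style-related.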

Wilds: A benchmark of in-the-wild distribution shifts

PW Koh, S Sagawa, H Marklund… - International …, 2021 - proceedings.mlr.press
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …
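
WILDS exposes these benchmarks through a common Python interface. The sketch below shows the usual loading pattern with get_dataset and get_train_loader from the wilds package; the dataset choice (Camelyon17), transform, and batch size are arbitrary examples, and keyword arguments should be checked against the installed package version.

```python
# A hedged sketch of the typical WILDS loading pattern; verify argument names
# against the `wilds` package documentation for your installed version.
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader
import torchvision.transforms as T

# Download/load one benchmark dataset (Camelyon17, chosen only as an example).
dataset = get_dataset(dataset="camelyon17", download=True)

# Official training split with a simple tensor transform.
train_data = dataset.get_subset("train", transform=T.ToTensor())

# Standard (non-group) loader over the training split.
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y, metadata in train_loader:
    # metadata encodes domain information (e.g., the source hospital), which is
    # what the benchmark uses to define in- vs. out-of-distribution splits.
    break
```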

Measuring robustness to natural distribution shifts in image classification

R Taori, A Dave, V Shankar, N Carlini… - Advances in …, 2020 - proceedings.neurips.cc
We study how robust current ImageNet models are to distribution shifts arising from natural
variations in datasets. Most research on robustness focuses on synthetic image …

Last layer re-training is sufficient for robustness to spurious correlations

P Kirichenko, P Izmailov, AG Wilson - arXiv preprint arXiv:2204.02937, 2022 - arxiv.org
Neural network classifiers can largely rely on simple spurious features, such as
backgrounds, to make predictions. However, even in these cases, we show that they still …
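
The remedy the paper's title points to is simple: keep the ERM-trained feature extractor frozen and refit only the final linear layer on a small held-out set that is balanced across groups. Below is a minimal sketch of that last-layer retraining idea using frozen features and scikit-learn logistic regression; the backbone, the balanced loader, and the regularization strength are placeholder assumptions.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(backbone, loader, device="cpu"):
    """Run a frozen feature extractor over a loader and collect features/labels."""
    backbone.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(backbone(x.to(device)).cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

def last_layer_retrain(backbone, balanced_loader, C=1.0, device="cpu"):
    """Refit only the classification head on a (group-)balanced held-out set.

    The backbone stays frozen, so only the final linear decision boundary
    changes; this is the core of last-layer retraining against spurious features.
    """
    X, y = extract_features(backbone, balanced_loader, device)
    head = LogisticRegression(C=C, max_iter=1000)
    head.fit(X, y)
    return head
```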