Domain generalization: A survey
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …
Kernel mean embedding of distributions: A review and beyond
A Hilbert space embedding of a distribution—in short, a kernel mean embedding—has
recently emerged as a powerful tool for machine learning and statistical inference. The basic …
Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …
In search of lost domain generalization
The goal of domain generalization algorithms is to predict well on distributions different from
those seen during training. While a myriad of domain generalization algorithms exist …
Just train twice: Improving group robustness without training group information
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …
Towards out-of-distribution generalization: A survey
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
Domain generalization with MixStyle
Though convolutional neural networks (CNNs) have demonstrated remarkable ability in
learning discriminative features, they often generalize poorly to unseen domains. Domain …
WILDS: A benchmark of in-the-wild distribution shifts
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …
Measuring robustness to natural distribution shifts in image classification
We study how robust current ImageNet models are to distribution shifts arising from natural
variations in datasets. Most research on robustness focuses on synthetic image …
Last layer re-training is sufficient for robustness to spurious correlations
Neural network classifiers can largely rely on simple spurious features, such as
backgrounds, to make predictions. However, even in these cases, we show that they still …