A review of single-source deep unsupervised visual domain adaptation

S Zhao, X Yue, S Zhang, B Li, H Zhao… - … on Neural Networks …, 2020 - ieeexplore.ieee.org
Large-scale labeled training datasets have enabled deep neural networks to excel across a
wide range of benchmark vision tasks. However, in many applications, it is prohibitively …

Generalizing to unseen domains: A survey on domain generalization

J Wang, C Lan, C Liu, Y Ouyang, T Qin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …

Fishr: Invariant gradient variances for out-of-distribution generalization

A Rame, C Dancette, M Cord - International Conference on …, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …
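The gradient-variance matching named in the Fishr title can be illustrated with a minimal NumPy toy. This is only a sketch under simplifying assumptions (a linear model, squared loss, and elementwise per-sample gradient variances), not the paper's exact formulation:

```python
import numpy as np

def per_sample_grads(w, X, y):
    """Per-sample gradients of the squared loss for a linear model X @ w."""
    resid = X @ w - y                # shape (n,)
    return 2.0 * resid[:, None] * X  # shape (n, d): one gradient per sample

def fishr_penalty(w, envs):
    """Penalize mismatch between per-environment gradient variances.

    envs: list of (X, y) pairs, one per training domain.
    Returns the mean squared distance of each domain's elementwise
    gradient variance from the average variance across domains.
    """
    variances = [per_sample_grads(w, X, y).var(axis=0) for X, y in envs]
    v_bar = np.mean(variances, axis=0)
    return float(np.mean([np.sum((v - v_bar) ** 2) for v in variances]))
```

When every domain induces the same gradient variance the penalty is zero; domains whose gradient statistics differ are pushed toward agreement when this term is added to the empirical risk.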

A fine-grained analysis on distribution shift

O Wiles, S Gowal, F Stimberg, SA Rebuffi… - arXiv preprint arXiv …, 2021 - arxiv.org
Robustness to distribution shifts is critical for deploying machine learning models in the real
world. Despite this necessity, there has been little work in defining the underlying …

Self-supervised augmentation consistency for adapting semantic segmentation

N Araslanov, S Roth - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
We propose an approach to domain adaptation for semantic segmentation that is both
practical and highly accurate. In contrast to previous work, we abandon the use of …

Invariant risk minimization

M Arjovsky, L Bottou, I Gulrajani… - arXiv preprint arXiv …, 2019 - arxiv.org
We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant
correlations across multiple training distributions. To achieve this goal, IRM learns a data …
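The IRM entry refers to an objective whose practical form (IRMv1 in Arjovsky et al.) adds a gradient penalty on a dummy classifier scale to the per-environment risks. A minimal NumPy sketch, assuming squared loss and a scalar dummy multiplier w = 1.0 for simplicity:

```python
import numpy as np

def irmv1_penalty(y_hat, y):
    """IRMv1 penalty for one environment under squared loss.

    Scales predictions by a dummy multiplier w = 1.0 and returns the
    squared gradient of the environment risk with respect to w:
    R_e(w) = mean((w * y_hat - y)^2), so dR_e/dw at w = 1 is
    mean(2 * (y_hat - y) * y_hat).
    """
    grad_w = np.mean(2.0 * (y_hat - y) * y_hat)
    return grad_w ** 2

def irm_objective(envs, lam=1.0):
    """Sum of per-environment risks plus lam times the gradient penalties.

    envs: list of (y_hat, y) prediction/target pairs, one per environment.
    """
    risk = sum(np.mean((y_hat - y) ** 2) for y_hat, y in envs)
    penalty = sum(irmv1_penalty(y_hat, y) for y_hat, y in envs)
    return risk + lam * penalty
```

A predictor that is simultaneously optimal in every environment incurs zero penalty; a predictor exploiting an environment-specific correlation gets a nonzero gradient at w = 1 and is penalized.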

Learning robust global representations by penalizing local predictive power

H Wang, S Ge, Z Lipton… - Advances in Neural …, 2019 - proceedings.neurips.cc
Despite their renowned in-domain predictive power, convolutional neural networks are
known to rely more on high-frequency patterns that humans deem superficial than on low …

The risks of invariant risk minimization

E Rosenfeld, P Ravikumar, A Risteski - arXiv preprint arXiv:2010.05761, 2020 - arxiv.org
Invariant Causal Prediction (Peters et al., 2016) is a technique for out-of-distribution
generalization which assumes that some aspects of the data distribution vary across the …

Cycle self-training for domain adaptation

H Liu, J Wang, M Long - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant
representations to narrow the domain shift, which are empirically effective but theoretically …

Domain generalization using causal matching

D Mahajan, S Tople, A Sharma - … conference on machine …, 2021 - proceedings.mlr.press
In the domain generalization literature, a common objective is to learn representations
independent of the domain after conditioning on the class label. We show that this objective …