Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Generalizing to unseen domains: A survey on domain generalization

J Wang, C Lan, C Liu, Y Ouyang, T Qin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …

Discovering invariant rationales for graph neural networks

YX Wu, X Wang, A Zhang, X He, TS Chua - arXiv preprint arXiv …, 2022 - arxiv.org
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input
graph's features, the rationale, which guides the model prediction. Unfortunately, the leading …

Invariance principle meets information bottleneck for out-of-distribution generalization

K Ahuja, E Caballero, D Zhang… - Advances in …, 2021 - proceedings.neurips.cc
The invariance principle from causality is at the heart of notable approaches such as
invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) …

How neural networks extrapolate: From feedforward to graph neural networks

K Xu, M Zhang, J Li, SS Du, K Kawarabayashi… - arXiv preprint arXiv …, 2020 - arxiv.org
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn
outside the support of the training distribution. Previous works report mixed empirical results …

Sparse invariant risk minimization

X Zhou, Y Lin, W Zhang… - … Conference on Machine …, 2022 - proceedings.mlr.press
Invariant Risk Minimization (IRM) is an emerging invariant feature extraction
technique that helps generalization under distributional shift. However, we find that there exists a …

Causal attention for unbiased visual recognition

T Wang, C Zhou, Q Sun… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Attention modules do not always help deep models learn causal features that are robust in
any confounding context, e.g., a foreground object feature is invariant to different …

Model-based domain generalization

A Robey, GJ Pappas… - Advances in Neural …, 2021 - proceedings.neurips.cc
Despite remarkable success in a variety of applications, it is well-known that deep learning
can fail catastrophically when presented with out-of-distribution data. Toward addressing …

Towards a theoretical framework of out-of-distribution generalization

H Ye, C **e, T Cai, R Li, Z Li… - Advances in Neural …, 2021 - proceedings.neurips.cc
Generalization to out-of-distribution (OOD) data is one of the central problems in modern
machine learning. Recently, there has been a surge of attempts to propose algorithms that mainly …

Bayesian invariant risk minimization

Y Lin, H Dong, H Wang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Generalization under distributional shift is an open challenge for machine learning. Invariant
Risk Minimization (IRM) is a promising framework to tackle this issue by extracting invariant …