Towards out-of-distribution generalization: A survey
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …
Discovering invariant rationales for graph neural networks
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input
graph's features, the rationale, which guides the model prediction. Unfortunately, the leading …
Invariance principle meets information bottleneck for out-of-distribution generalization
The invariance principle from causality is at the heart of notable approaches such as
invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) …
How neural networks extrapolate: From feedforward to graph neural networks
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn
outside the support of the training distribution. Previous works report mixed empirical results …
Sparse invariant risk minimization
Invariant Risk Minimization (IRM) is an emerging invariant-feature extraction
technique to help generalization under distributional shift. However, we find that there exists a …
Causal attention for unbiased visual recognition
The attention module does not always help deep models learn causal features that are robust in
any confounding context, e.g., a foreground object feature is invariant to different …
Model-based domain generalization
Despite remarkable success in a variety of applications, it is well-known that deep learning
can fail catastrophically when presented with out-of-distribution data. Toward addressing …
Towards a theoretical framework of out-of-distribution generalization
Generalization to out-of-distribution (OOD) data is one of the central problems in modern
machine learning. Recently, there has been a surge of attempts to propose algorithms that mainly …
Bayesian invariant risk minimization
Generalization under distributional shift is an open challenge for machine learning. Invariant
Risk Minimization (IRM) is a promising framework to tackle this issue by extracting invariant …