Last layer re-training is sufficient for robustness to spurious correlations

P Kirichenko, P Izmailov, AG Wilson - arXiv preprint arXiv:2204.02937, 2022 - arxiv.org
Neural network classifiers can largely rely on simple spurious features, such as
backgrounds, to make predictions. However, even in these cases, we show that they still …
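
The recipe named in the title (retrain only the final linear layer of an ERM-trained network on a small group-balanced held-out set, keeping the learned features frozen) can be summarized in a short sketch. The sketch below is illustrative only: `backbone` is assumed to return penultimate-layer features (e.g., a ResNet with its fc layer replaced by nn.Identity()), `balanced_loader` is assumed to sample every group equally, and the hyperparameters are placeholders rather than the paper's exact setup.

```python
# Minimal sketch of last-layer retraining for spurious-correlation robustness.
# The ERM feature extractor stays frozen; only a fresh linear head is refit
# on a small group-balanced reweighting set.
import torch
import torch.nn as nn

def retrain_last_layer(backbone, feat_dim, num_classes, balanced_loader,
                       epochs=100, lr=1e-3, weight_decay=1e-2, device="cpu"):
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad_(False)           # features stay fixed

    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.AdamW(head.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in balanced_loader:      # each group equally represented
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)       # frozen ERM features
            loss = loss_fn(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```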

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …
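
For illustration, a minimal sketch of one selective-augmentation strategy: mixup-style interpolation restricted to pairs that share a label but come from different domains, so the interpolated example keeps its label while domain-specific cues are diluted. The names (`x`, `y`, `domain`) and the pair-selection loop are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of selective mixup-style augmentation for distribution shift:
# each example is interpolated with another example that shares its label
# but comes from a different domain, and the (shared) label is kept.
import torch

def intra_label_mixup(x, y, domain, alpha=2.0):
    """x: (N, ...) inputs; y: (N,) labels; domain: (N,) domain/group ids."""
    lam = torch.distributions.Beta(alpha, alpha).sample((len(x),))
    partner = torch.arange(len(x))                    # default: mix with itself
    for i in range(len(x)):
        candidates = ((y == y[i]) & (domain != domain[i])).nonzero(as_tuple=True)[0]
        if len(candidates) > 0:
            partner[i] = candidates[torch.randint(len(candidates), (1,)).item()]
    lam = lam.view(-1, *([1] * (x.dim() - 1)))        # broadcast over input dims
    x_mix = lam * x + (1 - lam) * x[partner]
    return x_mix, y                                   # labels are unchanged
```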

On feature learning in the presence of spurious correlations

P Izmailov, P Kirichenko, N Gruver… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep classifiers are known to rely on spurious features—patterns which are correlated with
the target on the training data but not inherently relevant to the learning problem, such as the …

Learning causally invariant representations for out-of-distribution generalization on graphs

Y Chen, Y Zhang, Y Bian, H Yang… - Advances in …, 2022 - proceedings.neurips.cc
Despite recent success in using the invariance principle for out-of-distribution (OOD)
generalization on Euclidean data (e.g., images), studies on graph data are still limited …

Discover and cure: Concept-aware mitigation of spurious correlation

S Wu, M Yuksekgonul, L Zhang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Deep neural networks often rely on spurious correlations to make predictions, which hinders
generalization beyond training environments. For instance, models that associate cats with …

Towards last-layer retraining for group robustness with fewer annotations

T LaBonte, V Muthukumar… - Advances in Neural …, 2023 - proceedings.neurips.cc
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious
correlations and poor generalization on minority groups. The recent deep feature …

Spurious correlations in machine learning: A survey

W Ye, G Zheng, X Cao, Y Ma, A Zhang - arXiv preprint arXiv:2402.12715, 2024 - arxiv.org
Machine learning systems are known to be sensitive to spurious correlations between non-
essential features of the inputs (e.g., background, texture, and secondary objects) and the …

Simple and fast group robustness by automatic feature reweighting

S Qiu, A Potapczynski, P Izmailov… - … on Machine Learning, 2023 - proceedings.mlr.press
A major challenge to out-of-distribution generalization is reliance on spurious features—
patterns that are predictive of the class label in the training data distribution, but not causally …
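
A simplified sketch of the underlying idea: refit only the last layer with per-example weights that grow as the ERM model's confidence on the true class shrinks, so no group annotations are required. The exponential weighting and the names below (`features`, `labels`, `erm_head`, `gamma`) are illustrative assumptions rather than the paper's exact formulation.

```python
# Simplified sketch: last-layer retraining with automatic per-example weights.
# Examples the ERM head classifies with low confidence are upweighted before
# refitting a fresh linear head on the frozen features.
import torch
import torch.nn.functional as F

def reweighted_last_layer(features, labels, erm_head, num_classes,
                          gamma=4.0, epochs=500, lr=1e-2):
    """features: frozen ERM features (N, D); labels: (N,) class ids."""
    with torch.no_grad():
        probs = F.softmax(erm_head(features), dim=1)
        p_true = probs[torch.arange(len(labels)), labels]   # confidence on true class
        weights = torch.exp(-gamma * p_true)                # upweight low confidence
        weights = weights / weights.sum()                   # normalize

    head = torch.nn.Linear(features.shape[1], num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(epochs):
        per_example = F.cross_entropy(head(features), labels, reduction="none")
        loss = (weights * per_example).sum()                # weighted loss on the head
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head
```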

Out-of-distribution generalization on graphs: A survey

H Li, X Wang, Z Zhang, W Zhu - arXiv preprint arXiv:2202.07987, 2022 - arxiv.org
Graph machine learning has been extensively studied in both academia and industry.
Although booming with a vast number of emerging methods and techniques, most of the …

Mind the label shift of augmentation-based graph OOD generalization

J Yu, J Liang, R He - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Out-of-distribution (OOD) generalization is an important issue for Graph Neural
Networks (GNNs). Recent works employ different graph editions to generate augmented …