Last layer re-training is sufficient for robustness to spurious correlations

P Kirichenko, P Izmailov, AG Wilson - arXiv preprint arXiv:2204.02937, 2022 - arxiv.org
Neural network classifiers can largely rely on simple spurious features, such as
backgrounds, to make predictions. However, even in these cases, we show that they still …

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …

Discover and cure: Concept-aware mitigation of spurious correlation

S Wu, M Yuksekgonul, L Zhang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Deep neural networks often rely on spurious correlations to make predictions, which hinders
generalization beyond training environments. For instance, models that associate cats with …

Wild-time: A benchmark of in-the-wild distribution shift over time

H Yao, C Choi, B Cao, Y Lee… - Advances in Neural …, 2022 - proceedings.neurips.cc
Distribution shifts occur when the test distribution differs from the training distribution, and
can considerably degrade performance of machine learning models deployed in the real …

Predictive overfitting in immunological applications: Pitfalls and solutions

JP Gygi, SH Kleinstein, L Guan - Human Vaccines & …, 2023 - Taylor & Francis
Overfitting describes the phenomenon where a highly predictive model on the training data
generalizes poorly to future observations. It is a common concern when applying machine …

DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery--A Focus on Affinity Prediction Problems with Noise Annotations

Y Ji, L Zhang, J Wu, B Wu, LK Huang, T Xu… - arXiv preprint arXiv …, 2022 - arxiv.org
AI-aided drug discovery (AIDD) is gaining increasing popularity due to its promise of making
the search for new pharmaceuticals quicker, cheaper and more efficient. In spite of its …

Domain adaptation under open set label shift

S Garg, S Balakrishnan… - Advances in Neural …, 2022 - proceedings.neurips.cc
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where
the label distribution can change arbitrarily and a new class may arrive during deployment …

Brave the wind and the waves: Discovering robust and generalizable graph lottery tickets

K Wang, Y Liang, X Li, G Li, B Ghanem… - … on Pattern Analysis …, 2023 - ieeexplore.ieee.org
The training and inference of Graph Neural Networks (GNNs) are costly when scaling up to
large-scale graphs. Graph Lottery Ticket (GLT) has presented the first attempt to accelerate …

Benchmarking distribution shift in tabular data with TableShift

J Gardner, Z Popovic, L Schmidt - Advances in Neural …, 2024 - proceedings.neurips.cc
Robustness to distribution shift has become a growing concern for text and image models as
they transition from research subjects to deployment in the real world. However, high-quality …

Robust learning with progressive data expansion against spurious correlation

Y Deng, Y Yang, B Mirzasoleiman… - Advances in neural …, 2024 - proceedings.neurips.cc
While deep learning models have shown remarkable performance in various tasks, they are
susceptible to learning non-generalizable spurious features rather than the core features …