Deep long-tailed learning: A survey

Y Zhang, B Kang, B Hooi, S Yan… - IEEE transactions on …, 2023 - ieeexplore.ieee.org
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …
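
As an illustration of the rebalancing baselines this literature covers, here is a minimal sketch of inverse-frequency class reweighting in PyTorch; the class counts and classifier head are placeholders, not anything taken from the survey itself.

```python
# Hedged sketch: inverse-frequency class reweighting, one of the simplest
# rebalancing baselines in long-tailed learning. Class counts and the model
# below are illustrative placeholders.
import torch
import torch.nn as nn

class_counts = torch.tensor([5000.0, 1200.0, 300.0, 40.0])          # hypothetical long-tailed counts
weights = class_counts.sum() / (len(class_counts) * class_counts)   # inverse-frequency weights

criterion = nn.CrossEntropyLoss(weight=weights)

model = nn.Linear(128, len(class_counts))            # placeholder classifier head
features = torch.randn(16, 128)                      # placeholder batch of features
labels = torch.randint(0, len(class_counts), (16,))  # placeholder labels

loss = criterion(model(features), labels)            # tail classes contribute more per example
loss.backward()
```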

Just train twice: Improving group robustness without training group information

EZ Liu, B Haghgoo, AS Chen… - International …, 2021 - proceedings.mlr.press
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …
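
The title summarizes the recipe: train an ERM model once, then retrain with the examples it misclassified upweighted. Below is a minimal sketch of that two-stage idea, with the model, epoch counts, and upweighting factor as illustrative placeholders rather than the authors' exact settings.

```python
# Hedged sketch of a two-stage "train twice" recipe: fit an ERM model,
# upweight the training examples it gets wrong, and retrain from scratch.
import torch
import torch.nn as nn

def train(model, X, y, sample_weights, epochs=50, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        per_example = loss_fn(model(X), y)
        (per_example * sample_weights).mean().backward()
        opt.step()
    return model

X = torch.randn(256, 20)            # placeholder features
y = torch.randint(0, 2, (256,))     # placeholder labels

# Stage 1: plain ERM with uniform weights, typically stopped early.
stage1 = train(nn.Linear(20, 2), X, y, torch.ones(len(y)), epochs=10)

# Identify the error set of the first model.
with torch.no_grad():
    errors = stage1(X).argmax(dim=1) != y

# Stage 2: retrain with the misclassified examples upweighted.
lam = 20.0                          # upweighting factor (hyperparameter)
weights = torch.where(errors, torch.tensor(lam), torch.tensor(1.0))
stage2 = train(nn.Linear(20, 2), X, y, weights)
```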

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …
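
The selective augmentation in the title interpolates between chosen pairs of training examples. Below is a hedged sketch of one such variant, mixing same-label examples from different domains so that domain-specific cues get averaged out; the data and sampling details are placeholders, not the paper's full procedure.

```python
# Hedged sketch of selective mixup: interpolate pairs of examples that share
# a label but come from different domains. Data below is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def intra_label_mixup(X, y, domains, alpha=2.0):
    """Mix each example with a same-label example from a different domain."""
    X_mix, y_mix = [], []
    for i in range(len(X)):
        candidates = np.where((y == y[i]) & (domains != domains[i]))[0]
        if len(candidates) == 0:
            continue
        j = rng.choice(candidates)
        lam = rng.beta(alpha, alpha)
        X_mix.append(lam * X[i] + (1 - lam) * X[j])
        y_mix.append(y[i])          # labels agree, so no label interpolation needed
    return np.stack(X_mix), np.array(y_mix)

X = rng.normal(size=(64, 10))              # placeholder features
y = rng.integers(0, 2, size=64)            # placeholder labels
domains = rng.integers(0, 3, size=64)      # placeholder domain ids

X_aug, y_aug = intra_label_mixup(X, y, domains)
```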

Wilds: A benchmark of in-the-wild distribution shifts

PW Koh, S Sagawa, H Marklund… - International …, 2021 - proceedings.mlr.press
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …
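
The benchmark is distributed as a Python package; the snippet below follows its documented quick-start pattern, with the dataset choice and transform picked arbitrarily and exact names possibly differing across package versions.

```python
# Hedged sketch following the wilds package's quick-start pattern; the dataset
# choice and transform are arbitrary, and details may vary across versions.
from torchvision import transforms
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Download one of the benchmark datasets (here, the camelyon17 pathology task).
dataset = get_dataset(dataset="camelyon17", download=True)

# Take the in-distribution training split with a basic transform.
train_data = dataset.get_subset("train", transform=transforms.ToTensor())

# Standard (non-group) data loader over the training split.
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y, metadata in train_loader:   # metadata carries domain/group annotations
    break
```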

Self-supervised learning is more robust to dataset imbalance

H Liu, JZ HaoChen, A Gaidon, T Ma - arXiv preprint arXiv:2110.05025, 2021 - arxiv.org
Self-supervised learning (SSL) is a scalable way to learn general visual representations
since it learns without labels. However, large-scale unlabeled datasets in the wild often have …

Open-world semi-supervised learning

K Cao, M Brbic, J Leskovec - arXiv preprint arXiv:2102.03526, 2021 - arxiv.org
A fundamental limitation of applying semi-supervised learning in real-world settings is the
assumption that unlabeled test data contains only classes previously encountered in the …

Fine samples for learning with noisy labels

T Kim, J Ko, JH Choi, SY Yun - Advances in Neural …, 2021 - proceedings.neurips.cc
Modern deep neural networks (DNNs) become brittle when the datasets contain noisy
(incorrect) class labels. Robust techniques in the presence of noisy labels can be …

Robust learning with progressive data expansion against spurious correlation

Y Deng, Y Yang, B Mirzasoleiman… - Advances in Neural …, 2023 - proceedings.neurips.cc
While deep learning models have shown remarkable performance in various tasks, they are
susceptible to learning non-generalizable spurious features rather than the core features …

Coresets for robust training of deep neural networks against noisy labels

B Mirzasoleiman, K Cao… - Advances in Neural …, 2020 - proceedings.neurips.cc
Modern neural networks have the capacity to overfit noisy labels frequently found in real-world
datasets. Although great progress has been made, existing techniques are very …

Investigating why contrastive learning benefits robustness against label noise

Y Xue, K Whitecross… - … Conference on Machine …, 2022 - proceedings.mlr.press
Self-supervised Contrastive Learning (CL) has recently been shown to be very
effective in preventing deep networks from overfitting noisy labels. Despite its empirical …
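
For context, here is a minimal sketch of a plain InfoNCE-style contrastive objective, the generic form of the self-supervised loss referred to above; the embeddings are placeholders and this simplified version contrasts only across the two augmented views.

```python
# Hedged sketch of a simplified InfoNCE-style contrastive loss on placeholder
# embeddings; positives for each example sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: embeddings of two augmented views, shape (N, d)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) cosine similarities
    targets = torch.arange(len(z1))           # matching views are the positives
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)   # placeholder projections
loss = info_nce(z1, z2)
```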