Ethical machine learning in healthcare

IY Chen, E Pierson, S Rose, S Joshi… - Annual review of …, 2021 - annualreviews.org
The use of machine learning (ML) in healthcare raises numerous ethical concerns,
especially as models can amplify existing health inequities. Here, we outline ethical …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
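The excerpt cuts off before the formal statement; the condition it refers to is the standard i.i.d. assumption, written out below in our own notation (a minimal restatement, not copied from the survey):

```latex
% The i.i.d. assumption behind standard supervised learning:
\[
P_{\mathrm{train}}(X, Y) = P_{\mathrm{test}}(X, Y)
\]
% Out-of-distribution generalization is the setting where this fails, e.g.
% covariate shift, where the input marginal moves but labels given inputs
% stay fixed:
\[
P_{\mathrm{train}}(X) \neq P_{\mathrm{test}}(X),
\qquad
P_{\mathrm{train}}(Y \mid X) = P_{\mathrm{test}}(Y \mid X)
\]
```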

Just train twice: Improving group robustness without training group information

EZ Liu, B Haghgoo, AS Chen… - International …, 2021 - proceedings.mlr.press
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …
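The title summarizes the paper's two-stage recipe: fit a plain ERM model, collect the training examples it misclassifies, then retrain with that error set upweighted. A minimal sketch with a scikit-learn logistic regression; the fixed upweight factor and the linear model are illustrative stand-ins for the paper's neural networks and tuned hyperparameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def just_train_twice(X, y, upweight=20.0):
    """Two-stage upweighting in the spirit of JTT (illustrative sketch).

    Stage 1: fit a standard ERM model and record its training errors.
    Stage 2: refit from scratch with the error set upweighted, so that
    examples the ERM model gets wrong (often minority-group examples that
    cut against a spurious correlation) carry more weight.
    """
    # Stage 1: identification model trained by ordinary ERM.
    erm_model = LogisticRegression(max_iter=1000).fit(X, y)
    error_set = erm_model.predict(X) != y

    # Stage 2: upweight the error set and retrain.
    weights = np.where(error_set, upweight, 1.0)
    final_model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
    return final_model
```

In the paper, the upweight factor and the length of the first training stage are tuned on a small validation set that does have group annotations; the fixed factor above stands in for that.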

AI for radiographic COVID-19 detection selects shortcuts over signal

AJ DeGrave, JD Janizek, SI Lee - Nature Machine Intelligence, 2021 - nature.com
Artificial intelligence (AI) researchers and radiologists have recently reported AI systems that
accurately detect COVID-19 in chest radiographs. However, the robustness of these systems …

On feature learning in the presence of spurious correlations

P Izmailov, P Kirichenko, N Gruver… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep classifiers are known to rely on spurious features—patterns which are correlated with
the target on the training data but not inherently relevant to the learning problem, such as the …
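The definition of a spurious feature can be made concrete with a toy construction: one feature carries a genuine but noisy signal, another merely co-occurs with the label in the training split and decouples at test time. The dataset below is a hypothetical illustration of the definition, not an experiment from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, spurious_agreement):
    """Binary labels; feature 0 is weakly but genuinely predictive,
    feature 1 ("background") matches the label with probability
    spurious_agreement -- high at train time, chance at test time."""
    y = rng.integers(0, 2, size=n)
    core = y + rng.normal(0.0, 1.5, size=n)             # true, noisy signal
    agree = rng.random(n) < spurious_agreement
    spurious = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.1, size=n)
    return np.column_stack([core, spurious]), y

X_tr, y_tr = make_split(5000, spurious_agreement=0.95)  # correlation holds
X_te, y_te = make_split(5000, spurious_agreement=0.50)  # correlation broken

clf = LogisticRegression().fit(X_tr, y_tr)
print("train acc:", clf.score(X_tr, y_tr))   # high: the shortcut works here
print("test acc: ", clf.score(X_te, y_te))   # drops once the shortcut breaks
print("weights (core, spurious):", clf.coef_[0])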

Fishr: Invariant gradient variances for out-of-distribution generalization

A Rame, C Dancette, M Cord - International Conference on …, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …
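The Fishr penalty matches, across training domains, the variances of per-sample gradients of the loss with respect to the classifier weights. The sketch below computes that quantity for a logistic-regression head, where per-sample gradients have the closed form (p_i - y_i) * x_i; the synthetic domains and the way the penalty would be combined with the ERM loss are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grads(w, X, y):
    """Per-sample gradients of the logistic loss w.r.t. the weights w.
    For logistic regression these are (p_i - y_i) * x_i in closed form."""
    p = sigmoid(X @ w)
    return (p - y)[:, None] * X            # shape: (n_samples, n_features)

def fishr_penalty(w, domains):
    """Mean squared distance between each domain's gradient variance and
    the across-domain average variance, as in the Fishr objective."""
    variances = [per_sample_grads(w, X, y).var(axis=0) for X, y in domains]
    mean_var = np.mean(variances, axis=0)
    return np.mean([np.sum((v - mean_var) ** 2) for v in variances])

# Example usage with two synthetic domains and a random weight vector.
# In practice the total objective is the average ERM loss plus a tuned
# multiple of this penalty, applied to the final classifier layer.
rng = np.random.default_rng(0)
domains = [(rng.normal(size=(100, 5)), rng.integers(0, 2, size=100))
           for _ in range(2)]
print(fishr_penalty(rng.normal(size=5), domains))
```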

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …
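The selective augmentation in question (LISA) is a mixup variant: instead of interpolating arbitrary pairs, it interpolates examples that share a label but come from different domains (or, in its other mode, share a domain but differ in label), so that domain-specific cues tend to cancel out. A numpy sketch of the same-label, different-domain case; the array layout and the Beta parameter are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def intra_label_mixup(X, y, domains, alpha=2.0):
    """Mix each example with a partner that has the same label but a
    different domain (X, y, domains are numpy arrays of equal length).
    Interpolating across domains within a label encourages the model to
    ignore domain-specific cues (a sketch of LISA's intra-label mode)."""
    X_mixed = X.astype(float).copy()
    lam = rng.beta(alpha, alpha, size=len(X))
    for i in range(len(X)):
        candidates = np.where((y == y[i]) & (domains != domains[i]))[0]
        if len(candidates) == 0:
            continue                        # no cross-domain partner available
        j = rng.choice(candidates)
        X_mixed[i] = lam[i] * X[i] + (1.0 - lam[i]) * X[j]
    return X_mixed, y                       # labels unchanged: same-label pairs
```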

Gradient starvation: A learning proclivity in neural networks

M Pezeshki, O Kaba, Y Bengio… - Advances in …, 2021 - proceedings.neurips.cc
We identify and formalize a fundamental gradient descent phenomenon resulting in a
learning proclivity in over-parameterized neural networks. Gradient Starvation arises when …
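The proclivity can be previewed with a toy run of gradient descent: when two features both predict the label but one offers a much larger margin, the cross-entropy gradient is dominated by the easier feature, and once that feature fits most examples the shrinking residual leaves little gradient to grow the other weight. The snippet below is only a hypothetical numerical illustration in this spirit, not the paper's formal analysis (which works in a neural-tangent-kernel regime).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, size=n)
sign = 2 * y - 1

# Two predictive features: the first separates the classes with a much
# larger margin than the second.
X = np.column_stack([sign * 3.0 + rng.normal(0, 1, n),
                     sign * 1.0 + rng.normal(0, 1, n)])

w = np.zeros(2)
lr = 0.1
for step in range(501):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid predictions
    grad = X.T @ (p - y) / n               # cross-entropy gradient
    w -= lr * grad
    if step % 100 == 0:
        # The easy feature's weight races ahead; once it fits most examples
        # the residual (p - y) shrinks, and with it the gradient that would
        # have grown the second weight -- the "starvation" the title names.
        print(f"step {step:4d}  w_easy={w[0]:.3f}  w_hard={w[1]:.3f}")
```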

Wilds: A benchmark of in-the-wild distribution shifts

PW Koh, S Sagawa, H Marklund… - International …, 2021 - proceedings.mlr.press
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …
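The benchmark ships with a companion wilds Python package; a typical loading pattern, assuming the package is installed and acknowledging that the exact API may differ across versions, looks roughly like this:

```python
# pip install wilds   (companion package for the benchmark; API may vary by version)
from torchvision import transforms
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Download one of the benchmark datasets, e.g. Camelyon17 (tumor detection
# across hospitals, a real-world domain shift).
dataset = get_dataset(dataset="camelyon17", download=True)

# Official train split; each example is tagged with its domain (hospital).
train_data = dataset.get_subset("train", transform=transforms.ToTensor())
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y, metadata in train_loader:
    # metadata carries the domain annotations used for OOD evaluation
    ...
```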