A survey of algorithmic recourse: contrastive explanations and consequential recommendations

AH Karimi, G Barthe, B Schölkopf, I Valera - ACM Computing Surveys, 2022 - dl.acm.org
Machine learning is increasingly used to inform decision making in sensitive situations
where decisions have consequential effects on individuals' lives. In these settings, in …

Recent advances in adversarial training for adversarial robustness

T Bai, J Luo, J Zhao, B Wen, Q Wang - arXiv preprint arXiv:2102.01356, 2021 - arxiv.org
Adversarial training is one of the most effective approaches for defending deep learning
models against adversarial examples. Unlike other defense strategies, adversarial training …
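
For context, adversarial training is usually formalized as a min-max problem. The sketch below is the standard robust-optimization formulation used throughout this literature, given here as background rather than quoted from the survey; θ denotes the model parameters, f_θ the network, L a loss such as cross-entropy, and S the set of allowed perturbations, typically an ℓ∞ ball of radius ε.

```latex
% Standard adversarial-training objective (robust-optimization view);
% background sketch, not taken verbatim from the surveyed paper.
\[
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \left[ \max_{\delta \in \mathcal{S}} L\!\left( f_{\theta}(x + \delta),\, y \right) \right],
\qquad
\mathcal{S} = \left\{ \delta : \lVert \delta \rVert_{\infty} \le \epsilon \right\}.
\]
```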

Better diffusion models further improve adversarial training

Z Wang, T Pang, C Du, M Lin… - … on Machine Learning, 2023 - proceedings.mlr.press
It has been recognized that the data generated by the denoising diffusion probabilistic
model (DDPM) improves adversarial training. After two years of rapid development in …

Cross-entropy loss functions: Theoretical analysis and applications

A Mao, M Mohri, Y Zhong - International conference on …, 2023 - proceedings.mlr.press
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network, when the softmax is used. But, what guarantees …
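
The snippet's claim that softmax cross-entropy coincides with the logistic loss can be made concrete in the binary case; the identity below is standard and shown for clarity, not taken from the paper.

```latex
% Binary case with logits z_1, z_2 and true class 1 (illustrative).
% Softmax probability of the true class is a sigmoid of the logit difference:
\[
p_1 = \frac{e^{z_1}}{e^{z_1} + e^{z_2}}
    = \frac{1}{1 + e^{-(z_1 - z_2)}}
    = \sigma(z_1 - z_2),
\]
% so the cross-entropy loss is exactly the logistic loss on the margin z_1 - z_2:
\[
-\log p_1 = \log\!\left( 1 + e^{-(z_1 - z_2)} \right).
\]
```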

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Analyzing and mitigating object hallucination in large vision-language models

Y Zhou, C Cui, J Yoon, L Zhang, Z Deng, C Finn… - arXiv preprint arXiv …, 2023 - arxiv.org
Large vision-language models (LVLMs) have shown remarkable abilities in understanding
visual information with human languages. However, LVLMs still suffer from object …

Improving robustness using generated data

S Gowal, SA Rebuffi, O Wiles… - Advances in …, 2021 - proceedings.neurips.cc
Recent work argues that robust training requires substantially larger datasets than those
required for standard classification. On CIFAR-10 and CIFAR-100, this translates into a …

Data augmentation can improve robustness

SA Rebuffi, S Gowal, DA Calian… - Advances in …, 2021 - proceedings.neurips.cc
Adversarial training suffers from robust overfitting, a phenomenon where the robust test
accuracy starts to decrease during training. In this paper, we focus on reducing robust …

Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization

JP Miller, R Taori, A Raghunathan… - International …, 2021 - proceedings.mlr.press
For machine learning systems to be reliable, we must understand their performance in
unseen, out-of-distribution environments. In this paper, we empirically show that out-of …

Unsolved problems in ML safety

D Hendrycks, N Carlini, J Schulman… - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning (ML) systems are rapidly increasing in size, are acquiring new
capabilities, and are increasingly deployed in high-stakes settings. As with other powerful …