3D common corruptions and data augmentation

OF Kar, T Yeo, A Atanov… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
We introduce a set of image transformations that can be used as corruptions to evaluate the
robustness of models as well as data augmentation mechanisms for training neural …

A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others

Z Li, I Evtimov, A Gordo, C Hazirbas… - Proceedings of the …, 2023 - openaccess.thecvf.com
Machine learning models have been found to learn shortcuts (unintended decision
rules that are unable to generalize), undermining models' reliability. Previous works address …

Datamodels: Predicting predictions from training data

A Ilyas, SM Park, L Engstrom, G Leclerc… - arXiv preprint arXiv …, 2022 - arxiv.org
We present a conceptual framework, datamodeling, for analyzing the behavior of a model
class in terms of the training data. For any fixed "target" example $x$, training set $S$, and …
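(A minimal sketch of the datamodeling idea, in our own notation rather than the paper's exact formulation: write the training set as $S = \{z_1, \dots, z_n\}$ and encode a subset $S' \subseteq S$ by its indicator vector $\mathbf{1}_{S'} \in \{0,1\}^n$. A linear datamodel then approximates the output of a model trained on $S'$ and evaluated at the target $x$, e.g., its correct-class margin, as
$$ f(x; S') \approx \theta_0 + \theta^\top \mathbf{1}_{S'}, $$
with $\theta$ fit by regressing observed outputs over many retraining runs on random subsets $S'$.)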

Salient ImageNet: How to discover spurious features in deep learning?

S Singla, S Feizi - arXiv preprint arXiv:2110.04301, 2021 - arxiv.org
Deep neural networks can be unreliable in the real world, especially when they heavily use
spurious features for their predictions. Focusing on image classifications, we define …

Adaptive testing of computer vision models

I Gao, G Ilharco, S Lundberg… - Proceedings of the …, 2023 - openaccess.thecvf.com
Vision models often fail systematically on groups of data that share common semantic
characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a …

Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts

A Kolides, A Nawaz, A Rathor, D Beeman… - … Modelling Practice and …, 2023 - Elsevier
With the emergence of foundation models (FMs) that are trained on large amounts of data at
scale and adaptable to a wide range of downstream applications, AI is experiencing a …

Red teaming deep neural networks with feature synthesis tools

S Casper, T Bu, Y Li, J Li, K Zhang… - Advances in …, 2023 - proceedings.neurips.cc
Interpretable AI tools are often motivated by the goal of understanding model behavior in out-
of-distribution (OOD) contexts. Despite the attention this area of study receives, there are …

ModelDiff: A framework for comparing learning algorithms

H Shah, SM Park, A Ilyas… - … Conference on Machine …, 2023 - proceedings.mlr.press
We study the problem of (learning) algorithm comparison, where the goal is to find
differences between models trained with two different learning algorithms. We begin by …

Diagnosing and rectifying vision models using language

Y Zhang, JZ HaoChen, SC Huang, KC Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent multi-modal contrastive learning models have demonstrated the ability to learn an
embedding space suitable for building strong vision classifiers, by leveraging the rich …

Dataset interfaces: Diagnosing model failures using controllable counterfactual generation

J Vendrow, S Jain, L Engstrom, A Madry - arXiv preprint arXiv:2302.07865, 2023 - arxiv.org
Distribution shift is a major source of failure for machine learning models. However,
evaluating model reliability under distribution shift can be challenging, especially since it …