How to certify machine learning based safety-critical systems? A systematic literature review

F Tambon, G Laberge, L An, A Nikanjam… - Automated Software …, 2022 - Springer
Context: Machine Learning (ML) has been at the heart of many innovations in recent years. However, including it in so-called “safety-critical” systems such as automotive or …

The many faces of robustness: A critical analysis of out-of-distribution generalization

D Hendrycks, S Basart, N Mu… - Proceedings of the …, 2021 - openaccess.thecvf.com
We introduce four new real-world distribution shift datasets consisting of changes in image
style, image blurriness, geographic location, camera operation, and more. With our new …

Improving robustness against common corruptions by covariate shift adaptation

S Schneider, E Rusak, L Eck… - Advances in neural …, 2020 - proceedings.neurips.cc
Today's state-of-the-art machine vision models are vulnerable to image corruptions like
blurring or compression artefacts, limiting their performance in many real-world applications …

The origins and prevalence of texture bias in convolutional neural networks

K Hermann, T Chen, S Kornblith - Advances in Neural …, 2020 - proceedings.neurips.cc
Recent work has indicated that, unlike humans, ImageNet-trained CNNs tend to classify
images by texture rather than by shape. How pervasive is this bias, and where does it come …

Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations

J Dapello, T Marques, M Schrimpf… - Advances in …, 2020 - proceedings.neurips.cc
Current state-of-the-art object recognition models are largely based on convolutional neural
network (CNN) architectures, which are loosely inspired by the primate visual system …

RDumb: A simple approach that questions our progress in continual test-time adaptation

O Press, S Schneider, M Kümmerer… - Advances in Neural …, 2024 - proceedings.neurips.cc
Test-Time Adaptation (TTA) allows updating pre-trained models to changing data
distributions at deployment time. While early work tested these algorithms for individual fixed …

Learning perturbation sets for robust machine learning

E Wong, JZ Kolter - arXiv preprint arXiv:2007.08450, 2020 - arxiv.org
Although much progress has been made towards robust deep learning, a significant gap in
robustness remains between real-world perturbations and more narrowly defined sets …

Test-time adaptation to distribution shift by confidence maximization and input transformation

CK Mummadi, R Hutmacher, K Rambach… - arXiv preprint arXiv …, 2021 - arxiv.org
Deep neural networks often exhibit poor performance on data that is unlikely under the train-
time data distribution, for instance, data affected by corruptions. Previous works demonstrate …

Robustness analysis of video-language models against visual and language perturbations

M Schiappa, S Vyas, H Palangi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Joint visual and language modeling on large-scale datasets has recently shown good
progress in multi-modal tasks when compared to single-modal learning. However …

Benchmarking the robustness of semantic segmentation models with respect to common corruptions

C Kamann, C Rother - International journal of computer vision, 2021 - Springer
When designing a semantic segmentation model for a real-world application, such as
autonomous driving, it is crucial to understand the robustness of the network with respect to …