A survey on adversarial deep learning robustness in medical image analysis

KD Apostolidis, GA Papakostas - Electronics, 2021 - mdpi.com
In the past years, deep neural networks (DNN) have become popular in many disciplines
such as computer vision (CV), natural language processing (NLP), etc. The evolution of …

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

S Ali, T Abuhmed, S El-Sappagh, K Muhammad… - Information fusion, 2023 - Elsevier
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated
applications, but the outcomes of many AI models are challenging to comprehend and trust …

BackdoorBench: A comprehensive benchmark of backdoor learning

B Wu, H Chen, M Zhang, Z Zhu, S Wei… - Advances in …, 2022 - proceedings.neurips.cc
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …

On the adversarial robustness of vision transformers

R Shao, Z Shi, J Yi, PY Chen, CJ Hsieh - arXiv preprint arXiv:2103.15670, 2021 - arxiv.org
Following the success in advancing natural language processing and understanding,
transformers are expected to bring revolutionary changes to computer vision. This work …

A comprehensive survey of robust deep learning in computer vision

J Liu, Y Jin - Journal of Automation and Intelligence, 2023 - Elsevier
Deep learning has presented remarkable progress in various tasks. Despite the excellent
performance, deep learning models remain not robust, especially to well-designed …

Assaying out-of-distribution generalization in transfer learning

F Wenzel, A Dittadi, P Gehler… - Advances in …, 2022 - proceedings.neurips.cc
Since out-of-distribution generalization is a generally ill-posed problem, various proxy
targets (eg, calibration, adversarial robustness, algorithmic corruptions, invariance across …

Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations

J Dapello, T Marques, M Schrimpf… - Advances in …, 2020 - proceedings.neurips.cc
Current state-of-the-art object recognition models are largely based on convolutional neural
network (CNN) architectures, which are loosely inspired by the primate visual system …

Adversarial robustness comparison of Vision Transformer and MLP-Mixer to CNNs

P Benz, S Ham, C Zhang, A Karjauv… - arXiv preprint arXiv …, 2021 - arxiv.org
Convolutional Neural Networks (CNNs) have become the de facto gold standard in
computer vision applications in the past years. Recently, however, new model architectures …

FMix: Enhancing mixed sample data augmentation

E Harris, A Marcu, M Painter, M Niranjan… - arXiv preprint arXiv …, 2020 - arxiv.org
Mixed Sample Data Augmentation (MSDA) has received increasing attention in recent
years, with many successful variants such as MixUp and CutMix. By studying the mutual …

SurFree: a fast surrogate-free black-box attack

T Maho, T Furon, E Le Merrer - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Machine learning classifiers are critically prone to evasion attacks. Adversarial
examples are slightly modified inputs that are then misclassified, while remaining …