Resilience and security of deep neural networks against intentional and unintentional perturbations: Survey and research challenges

S Sayyed, M Zhang, S Rifat, A Swami… - arXiv preprint arXiv …, 2024 - arxiv.org
In order to deploy deep neural networks (DNNs) in high-stakes scenarios, it is imperative
that DNNs provide inference robust to external perturbations, both intentional and …

Revisiting the adversarial robustness of vision language models: a multimodal perspective

W Zhou, S Bai, DP Mandic, Q Zhao, B Chen - arXiv preprint arXiv …, 2024 - arxiv.org
Pretrained vision-language models (VLMs) like CLIP exhibit exceptional generalization
across diverse downstream tasks. While recent studies reveal their vulnerability to …

Attention-based investigation and solution to the trade-off issue of adversarial training

C Shao, W Li, J Huo, Z Feng, Y Gao - Neural Networks, 2024 - Elsevier
Adversarial training has become the mainstream method to boost adversarial robustness of
deep models. However, it often suffers from the trade-off dilemma, where the use of …

Artificial Immune System of Secure Face Recognition Against Adversarial Attacks

M Ren, Y Wang, Y Zhu, Y Huang, Z Sun, Q Li… - International Journal of …, 2024 - Springer
Deep learning-based face recognition models are vulnerable to adversarial attacks. In
contrast to general noise, the presence of imperceptible adversarial noise can lead to …

On the limitations of adversarial training for robust image classification with convolutional neural networks

M Carletti, A Sinigaglia, M Terzi, GA Susto - Information Sciences, 2024 - Elsevier
Adversarial Training has proved to be an effective training paradigm to enforce robustness
against adversarial examples in modern neural network architectures. Despite many efforts …