How deep learning sees the world: A survey on adversarial attacks & defenses

JC Costa, T Roxo, H Proença, PRM Inácio - IEEE Access, 2024 - ieeexplore.ieee.org
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …

Texture re-scalable universal adversarial perturbation

Y Huang, Q Guo, F Juefei-Xu, M Hu… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Universal adversarial perturbation (UAP), also known as image-agnostic perturbation, is a
fixed perturbation map that can fool a classifier with high probability on arbitrary images …
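
As a minimal sketch of the UAP idea described in this snippet (not the method proposed in the paper), the following shows how a single fixed perturbation map delta, bounded by an assumed L-infinity budget eps, would be applied to arbitrary images before classification; the function name apply_uap and the parameter values are hypothetical.

# Minimal sketch: one image-agnostic perturbation "delta" shared by all inputs.
import torch

def apply_uap(images: torch.Tensor, delta: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """Add one universal perturbation to a batch of images.

    images: (N, C, H, W) tensor with values in [0, 1]
    delta:  (C, H, W) fixed perturbation map, shared across all images
    eps:    assumed L-infinity budget (illustrative choice)
    """
    delta = delta.clamp(-eps, eps)            # keep the perturbation within the budget
    adv = (images + delta).clamp(0.0, 1.0)    # same delta for every image; stay in valid range
    return adv

if __name__ == "__main__":
    # Hypothetical usage with random images and a random (untrained) perturbation map.
    imgs = torch.rand(4, 3, 224, 224)
    delta = torch.empty(3, 224, 224).uniform_(-8 / 255, 8 / 255)
    adv_imgs = apply_uap(imgs, delta)
    print(adv_imgs.shape)  # torch.Size([4, 3, 224, 224])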

Transferable Structural Sparse Adversarial Attack Via Exact Group Sparsity Training

D Ming, P Ren, Y Wang, X Feng - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Deep neural networks (DNNs) are vulnerable to highly transferable adversarial attacks.
In particular, many studies have shown that sparse attacks pose a significant threat to DNNs …

Rethinking impersonation and dodging attacks on face recognition systems

F Zhou, Q Zhou, B Yin, H Zheng, X Lu, L Ma… - Proceedings of the 32nd …, 2024 - dl.acm.org
Face Recognition (FR) systems can be easily deceived by adversarial examples that
manipulate benign face images through imperceptible perturbations. Adversarial attacks on …

Adversarial attacks on both face recognition and face anti-spoofing models

F Zhou, Q Zhou, X Li, X Lu, L Ma, H Ling - arXiv preprint arXiv:2405.16940, 2024 - arxiv.org
Adversarial attacks on Face Recognition (FR) systems have proven highly effective in
compromising pure FR models, yet such adversarial examples may be ineffective against the complete …

Boosting the Transferability of Adversarial Attack on Vision Transformer with Adaptive Token Tuning

D Ming, P Ren, Y Wang, X Feng - Advances in Neural …, 2025 - proceedings.neurips.cc
Vision transformers (ViTs) perform exceptionally well in various computer vision tasks but
remain vulnerable to adversarial attacks. Recent studies have shown that the transferability …

Imperceptible Face Forgery Attack via Adversarial Semantic Mask

D Liu, Q Su, C Peng, N Wang, X Gao - arXiv preprint arXiv:2406.10887, 2024 - arxiv.org
With the rapid development of generative modeling techniques, face forgery detection has drawn
increasing attention in the field. Researchers find that existing face forgery …

Devil in Shadow: Attacking NIR-VIS Heterogeneous Face Recognition via Adversarial Shadow

D Liu, R Sheng, C Peng, N Wang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Near infrared-visible (NIR-VIS) heterogeneous face recognition aims to match face identities
in cross-modality settings and has seen significant progress recently. The work …

Enhancing the Transferability of Adversarial Attacks on Face Recognition with Diverse Parameters Augmentation

F Zhou, B Yin, H Ling, Q Zhou, W Wang - arXiv preprint arXiv:2411.15555, 2024 - arxiv.org
Face Recognition (FR) models are vulnerable to adversarial examples that subtly
manipulate benign face images, underscoring the urgent need to improve the transferability …