Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

AI agents under threat: A survey of key security challenges and future pathways

Z Deng, Y Guo, C Han, W Ma, J Xiong, S Wen… - ACM Computing …, 2024 - dl.acm.org
An Artificial Intelligence (AI) agent is a software entity that autonomously performs tasks or
makes decisions based on pre-defined objectives and data inputs. AI agents, capable of …

Badclip: Dual-embedding guided backdoor attack on multimodal contrastive learning

S Liang, M Zhu, A Liu, B Wu, X Cao… - Proceedings of the …, 2024 - openaccess.thecvf.com
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP, they can be easily countered by specialized backdoor defenses for …

X-Adv: Physical adversarial object attacks against x-ray prohibited item detection

A Liu, J Guo, J Wang, S Liang, R Tao, W Zhou… - 32nd USENIX Security …, 2023 - usenix.org
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …

Dual attention suppression attack: Generate adversarial camouflage in physical world

J Wang, A Liu, Z Yin, S Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep learning models are vulnerable to adversarial examples. As a more threatening type of
attack against practical deep learning systems, physical adversarial examples have received extensive …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Vl-trojan: Multimodal instruction backdoor attacks against autoregressive visual language models

J Liang, S Liang, A Liu, X Cao - International Journal of Computer Vision, 2025 - Springer
Autoregressive Visual Language Models (VLMs) demonstrate remarkable few-shot
learning capabilities within a multimodal context. Recently, multimodal instruction tuning has …

Exploring the relationship between architectural design and adversarially robust generalization

A Liu, S Tang, S Liang, R Gong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial training has been demonstrated to be one of the most effective remedies for
defending against adversarial examples, yet it often suffers from the huge robustness generalization …

Patch-wise attack for fooling deep neural network

L Gao, Q Zhang, J Song, X Liu, HT Shen - Computer Vision–ECCV 2020 …, 2020 - Springer
By adding human-imperceptible noise to clean images, the resultant adversarial examples
can fool other unknown models. Features of a pixel extracted by deep neural networks …
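The snippet above describes the core mechanism behind gradient-based attacks: adding small, human-imperceptible noise to a clean image so that a model misclassifies it. A minimal sketch of the idea, in the spirit of the Fast Gradient Sign Method, is shown below; the logistic-regression "model" and all parameter values are illustrative stand-ins (assumptions, not the paper's actual attack, which operates patch-wise on deep networks):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style perturbation for a logistic-regression stand-in model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack steps by eps in the
    direction of the sign of that gradient, increasing the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # dL/dx of cross-entropy loss
    return x + eps * np.sign(grad_x)        # small (imperceptible) signed step

# Hypothetical weights and input for illustration only.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.05)
```

Because only the sign of the gradient is used, every input dimension moves by exactly `eps`, which keeps the perturbation's max-norm small regardless of the gradient's magnitude.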

Bias-based universal adversarial patch attack for automatic check-out

A Liu, J Wang, X Liu, B Cao, C Zhang, H Yu - Computer Vision–ECCV …, 2020 - Springer
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, adversarial patch, with noise confined to a small …
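The snippet above contrasts adversarial patches, where the noise is confined to a small region of the image rather than spread imperceptibly across all pixels. A minimal sketch of applying such a patch follows; the image size, patch contents, and placement are hypothetical values for illustration, not the paper's optimized bias-based patch:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste an adversarial patch onto a copy of the image.

    Unlike pixel-wise perturbations, the modification is confined to a
    small rectangular region; all pixels outside it are untouched.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Toy 8x8 grayscale image with a 2x2 patch placed at row 3, column 3.
img = np.zeros((8, 8))
patch = np.ones((2, 2))
adv = apply_patch(img, patch, top=3, left=3)
```

In a real attack, the patch contents would be optimized to maximize the target model's loss; this sketch only shows the confinement property.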