Physical adversarial attack meets computer vision: A decade survey

H Wei, H Tang, X Jia, Z Wang, H Yu, Z Li… - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024 - ieeexplore.ieee.org
Despite the impressive achievements of Deep Neural Networks (DNNs) in computer vision,
their vulnerability to adversarial attacks remains a critical concern. Extensive research has …
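The vulnerability the survey addresses is easiest to see in the digital setting. Below is a minimal sketch of the classic one-step FGSM attack (Goodfellow et al.), not a method from this survey, which focuses on physical-world attacks; `model`, `image`, and `label` are hypothetical placeholders and `eps` is an illustrative budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=8 / 255):
    """One-step FGSM: nudge every pixel along the sign of the loss gradient.

    `model` is any differentiable classifier; `image` is a [0, 1] tensor.
    `eps` is an illustrative perturbation budget, not a value from the survey.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb in the direction that increases the loss, then re-clip to [0, 1].
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```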

Backdoor attacks and defenses targeting multi-domain ai models: A comprehensive review

S Zhang, Y Pan, Q Liu, Z Yan, KKR Choo… - ACM Computing Surveys, 2024 - dl.acm.org
Since security concerns first emerged in artificial intelligence (AI), backdoor attacks have
received substantial research attention. Attackers can utilize …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural Networks and Learning Systems, 2022 - ieeexplore.ieee.org
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
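To make this behavior concrete: the classic way to plant such a backdoor is BadNets-style data poisoning, where a small fraction of training images receive a fixed patch trigger and a flipped label. A minimal sketch, with the poison rate, patch size, and target class as illustrative assumptions:

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, rate=0.05, patch=3):
    """BadNets-style poisoning: stamp a white square into a small fraction of
    training images and relabel them to the attacker's target class.

    `images` is (N, H, W, C) in [0, 1]; `rate`, `patch`, and `target_class`
    are illustrative choices, not values from the survey.
    """
    images, labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0  # trigger: white patch, bottom-right
    labels[idx] = target_class              # the flipped label teaches the shortcut
    return images, labels
```

A model trained on the result behaves normally on clean inputs but predicts `target_class` whenever the patch is present.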

Color backdoor: A robust poisoning attack in color space

W Jiang, H Li, G Xu, T Zhang - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 - openaccess.thecvf.com
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …
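Here the trigger is not a localized patch but a uniform shift applied to every pixel in some color space, which survives common image transformations. A minimal sketch of the trigger application only; the paper additionally searches for a shift that stays visually natural, and the RGB offsets below are illustrative:

```python
import numpy as np

def apply_color_trigger(image, shift=(0.04, -0.03, 0.02)):
    """Global color-space trigger: add one uniform offset to every pixel.

    `image` is (H, W, 3) in [0, 1]; the `shift` values are illustrative, not
    the paper's optimized trigger.
    """
    return np.clip(image + np.asarray(shift), 0.0, 1.0)
```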

Backdoor defense via adaptively splitting poisoned dataset

K Gao, Y Bai, J Gu, Y Yang… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 - openaccess.thecvf.com
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoored and thus maliciously altered. Since DNNs usually adopt …
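The core primitive behind such defenses is a data split driven by per-sample training statistics. The sketch below ranks samples by loss under the current model and returns a mask for the low-loss pool; it is a heavily simplified stand-in for the paper's adaptive splitting, and `clean_frac` plus the policy of which pool to trust are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def split_by_loss(model, loader, clean_frac=0.5, device="cpu"):
    """Loss-guided split: score every training sample with its current loss
    and separate the low-loss pool from the rest.

    A simplified sketch of one splitting step only; the paper adapts the
    split throughout training and learns from the suspicious pool without
    using its labels. `clean_frac` is an illustrative choice.
    """
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            per_sample = F.cross_entropy(
                model(x.to(device)), y.to(device), reduction="none")
            losses.append(per_sample.cpu())
    losses = torch.cat(losses)
    cutoff = losses.kthvalue(max(1, int(clean_frac * len(losses)))).values
    return losses <= cutoff  # True = low-loss pool, False = quarantined pool
```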

Fiba: Frequency-injection based backdoor attack in medical image analysis

Y Feng, B Ma, J Zhang, S Zhao… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 - openaccess.thecvf.com
In recent years, the security of AI systems has drawn increasing research attention,
especially in the medical imaging realm. To develop a secure medical image analysis (MIA) …
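FIBA's key idea is to hide the trigger in the frequency domain: blend the low-frequency amplitude spectrum of a trigger image into a benign image while keeping the image's own phase, so the spatial appearance barely changes. A minimal grayscale sketch, where the blend strength `alpha` and band fraction `beta` are illustrative:

```python
import numpy as np

def frequency_trigger(image, trigger, alpha=0.15, beta=0.1):
    """Frequency-injection trigger in the spirit of FIBA: mix the trigger's
    low-frequency amplitude into the image's spectrum, keep the phase.

    `image` and `trigger` are same-shaped 2-D arrays; `alpha` and `beta`
    are illustrative, not the paper's settings.
    """
    f_img = np.fft.fftshift(np.fft.fft2(image))
    f_trg = np.fft.fftshift(np.fft.fft2(trigger))
    amp, phase = np.abs(f_img), np.angle(f_img)
    h, w = image.shape
    cy, cx = h // 2, w // 2
    bh, bw = max(1, int(beta * h / 2)), max(1, int(beta * w / 2))
    # Blend amplitudes only inside the centered low-frequency window.
    window = (slice(cy - bh, cy + bh), slice(cx - bw, cx + bw))
    amp[window] = (1 - alpha) * amp[window] + alpha * np.abs(f_trg)[window]
    poisoned = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
    return np.real(poisoned)
```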

Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch

H Souri, L Fowl, R Chellappa… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
As the curation of data for machine learning becomes increasingly automated, dataset
tampering is a mounting threat. Backdoor attackers tamper with training data to embed a …
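The attack is clean-label and patch-free at poisoning time: bounded perturbations are optimized so that the training gradient on the poisons mimics the gradient that would teach the trigger behavior (gradient matching). A sketch of that alignment objective alone, assuming a PyTorch surrogate `model` with placeholder batches; the full attack optimizes perturbations over many steps and periodically retrains surrogates:

```python
import torch
import torch.nn.functional as F

def gradient_alignment_loss(model, poison_x, poison_y, patched_x, target_y):
    """Gradient-matching objective in the spirit of Sleeper Agent: make the
    training gradient on the (perturbed) poison batch point in the same
    direction as the gradient of the adversarial goal, i.e. classifying
    trigger-patched source images as the target class.

    Sketch of the objective only; all arguments are placeholders.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True lets the attacker later backpropagate into poison_x.
    g_poison = torch.autograd.grad(
        F.cross_entropy(model(poison_x), poison_y), params, create_graph=True)
    g_target = torch.autograd.grad(
        F.cross_entropy(model(patched_x), target_y), params)
    num = sum((gp * gt.detach()).sum() for gp, gt in zip(g_poison, g_target))
    den = (torch.sqrt(sum((gp ** 2).sum() for gp in g_poison)) *
           torch.sqrt(sum((gt ** 2).sum() for gt in g_target)).detach())
    return 1.0 - num / den  # 1 - cosine similarity of the flattened gradients
```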

Distribution preserving backdoor attack in self-supervised learning

G Tao, Z Wang, S Feng, G Shen, S Ma… - 2024 IEEE Symposium on Security and Privacy (SP), 2024 - ieeexplore.ieee.org
Self-supervised learning is widely used in various domains for building foundation models. It
has been demonstrated to achieve state-of-the-art performance in a range of tasks. In the …
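In the self-supervised setting the attacker manipulates the embedding space directly: triggered inputs should land near the target concept, yet remain statistically indistinguishable from clean embeddings so that outlier-based detectors fail. A sketch of those two forces, with a simple mean-discrepancy term standing in for the paper's alignment machinery and `lam` as an illustrative weight:

```python
import torch
import torch.nn.functional as F

def distribution_preserving_loss(emb_poison, emb_target, emb_clean, lam=1.0):
    """Two forces of a distribution-preserving SSL backdoor: (1) pull the
    embeddings of triggered inputs toward the target concept; (2) keep their
    statistics close to clean embeddings so they do not stand out.

    `emb_poison`/`emb_clean` are (N, D) batches, `emb_target` a (D,) anchor;
    the mean-discrepancy penalty and `lam` are illustrative simplifications.
    """
    attract = 1.0 - F.cosine_similarity(
        emb_poison, emb_target.expand_as(emb_poison)).mean()
    blend_in = (emb_poison.mean(0) - emb_clean.mean(0)).norm()
    return attract + lam * blend_in
```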

Prompt-specific poisoning attacks on text-to-image generative models

S Shan, W Ding, J Passananti, H Zheng… - arXiv preprint arXiv…, 2023 - arxiv.org
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …
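The attack targets a single prompt or concept rather than the whole model. The simplest "dirty-label" version of the idea is just constructing mismatched image-text pairs; the paper itself goes further and perturbs clean-looking images so they embed like the anchor concept. A sketch of the dirty-label baseline, with concept names and `n` as illustrative placeholders:

```python
def build_prompt_specific_poison(anchor_images, target_concept, n=50):
    """Dirty-label sketch of prompt-specific poisoning: caption images that
    actually depict an unrelated anchor concept (say, "cow") with text naming
    the target concept (say, "car"), so the model's notion of the target
    drifts once these pairs enter a web-scraped training set.

    `anchor_images`, `target_concept`, and `n` are illustrative placeholders.
    """
    caption = f"a photo of a {target_concept}"
    return [(img, caption) for img in anchor_images[:n]]
```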

Text-to-image diffusion models can be easily backdoored through multimodal data poisoning

S Zhai, Y Dong, Q Shen, S Pu, Y Fang… - Proceedings of the 31st ACM International Conference on Multimedia, 2023 - dl.acm.org
With the help of conditioning mechanisms, state-of-the-art diffusion models have
achieved tremendous success in guided image generation, particularly in text-to-image …
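Concretely, multimodal poisoning couples a textual trigger with a visual edit: a fraction of (image, caption) pairs receive a trigger token in the caption and the attacker's edit in the image, so a model fine-tuned on them reproduces the edit whenever the trigger appears in a prompt. A minimal sketch; `trigger_token`, `rate`, and `edit_image` are illustrative placeholders:

```python
import random

def poison_pairs(pairs, trigger_token="[T]", rate=0.1, edit_image=lambda im: im):
    """Multimodal data poisoning sketch for text-to-image training: insert a
    trigger token into some captions and apply the paired image edit.

    `pairs` is a list of (image, caption); all parameters are illustrative.
    """
    out = []
    for image, caption in pairs:
        if random.random() < rate:
            out.append((edit_image(image), f"{trigger_token} {caption}"))
        else:
            out.append((image, caption))
    return out
```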