Physical adversarial attack meets computer vision: A decade survey
Despite the impressive achievements of Deep Neural Networks (DNNs) in computer vision,
their vulnerability to adversarial attacks remains a critical concern. Extensive research has …
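To ground what "adversarial attack" means here, a one-step FGSM perturbation is the canonical digital example; the sketch below is illustrative only, with `model`, `image`, `label`, and the step size `eps` all assumed rather than taken from the survey:

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=8 / 255):
    """One-step FGSM: nudge the input along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)  # batched tensor in [0, 1]
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()
```

Physical attacks of the kind this survey covers carry the same idea into the real world, e.g. printed patches or stickers rather than pixel-level perturbations.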
Backdoor attacks and defenses targeting multi-domain AI models: A comprehensive review
Since the emergence of security concerns in artificial intelligence (AI), there has been
significant attention devoted to the examination of backdoor attacks. Attackers can utilize …
Backdoor learning: A survey
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
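The behavior described above is classically obtained with a BadNets-style patch trigger: stamp a small pattern on a fraction of training images and relabel them to the attacker's class. A minimal sketch, with the trigger shape, position, and target label all assumed for illustration:

```python
def poison_sample(image, target_class, patch_size=3):
    """Stamp a white square trigger and relabel to the attacker's class.
    `image` is assumed to be a CHW torch tensor with values in [0, 1]."""
    poisoned = image.clone()
    poisoned[..., -patch_size:, -patch_size:] = 1.0  # bottom-right corner trigger
    return poisoned, target_class
```

Poisoning even a small fraction of the training set this way typically teaches the model to associate the patch with `target_class` while leaving clean-input accuracy intact.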
Color backdoor: A robust poisoning attack in color space
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …
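Unlike patch triggers, a color-space trigger perturbs every pixel uniformly, which helps it survive cropping and resizing. A minimal sketch; the shift values here are arbitrary assumptions (the attack itself selects a shift that keeps the image looking natural):

```python
import torch

def apply_color_trigger(image, shift=(0.10, -0.05, 0.08)):
    """Apply a uniform per-channel color shift as a global, patch-free trigger.
    `image` is assumed to be a CHW tensor with values in [0, 1]."""
    t = torch.tensor(shift).view(3, 1, 1)  # broadcast the shift over H and W
    return (image + t).clamp(0, 1)
```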
Backdoor defense via adaptively splitting poisoned dataset
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoored and thus maliciously altered. Since DNNs usually adopt …
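A common ingredient of such defenses is splitting the training set by per-sample loss statistics. The sketch below is a generic simplification, not this paper's exact adaptive procedure; the model, loader, and split ratio are assumptions:

```python
import torch
import torch.nn.functional as F

def loss_guided_split(model, loader, keep_ratio=0.5):
    """Rank samples by per-sample loss and split them into two pools.
    Assumes `loader` iterates the dataset in a fixed order (no shuffling)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            losses.append(F.cross_entropy(model(x), y, reduction="none"))
    order = torch.argsort(torch.cat(losses))  # ascending per-sample loss
    k = int(keep_ratio * len(order))
    return order[:k], order[k:]  # dataset indices: one pool per loss regime
```

A defense can then train with full supervision on one pool and treat the other as unlabeled, limiting how much influence suspicious samples exert.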
FIBA: Frequency-injection based backdoor attack in medical image analysis
In recent years, the security of AI systems has drawn increasing research attention,
especially in the medical imaging realm. To develop a secure medical image analysis (MIA) …
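The frequency-injection idea can be sketched as amplitude-spectrum blending: mix the trigger's low-frequency amplitude into the benign image while keeping the benign phase, which carries most of the semantics. The blend ratio `alpha` and band size `beta` below are assumed values:

```python
import numpy as np

def frequency_inject(benign, trigger, alpha=0.15, beta=0.1):
    """Blend the trigger's low-frequency amplitude spectrum into a benign image.
    Both inputs are assumed to be float arrays of identical (H, W, C) shape."""
    fb = np.fft.fftshift(np.fft.fft2(benign, axes=(0, 1)), axes=(0, 1))
    ft = np.fft.fftshift(np.fft.fft2(trigger, axes=(0, 1)), axes=(0, 1))
    amp_b, phase_b, amp_t = np.abs(fb), np.angle(fb), np.abs(ft)

    # Mask a central (low-frequency) band whose relative size is `beta`.
    h, w = benign.shape[:2]
    mask = np.zeros(benign.shape, dtype=bool)
    bh, bw = int(beta * h / 2), int(beta * w / 2)
    mask[h // 2 - bh:h // 2 + bh, w // 2 - bw:w // 2 + bw] = True

    amp_b[mask] = (1 - alpha) * amp_b[mask] + alpha * amp_t[mask]
    mixed = amp_b * np.exp(1j * phase_b)
    out = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```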
Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch
H Souri, L Fowl, R Chellappa, et al. - Advances in …, 2022 - proceedings.neurips.cc
As the curation of data for machine learning becomes increasingly automated, dataset
tampering is a mounting threat. Backdoor attackers tamper with training data to embed a …
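Attacks in this family are typically crafted with a gradient-matching objective: optimize the poisons so the training gradient they induce aligns with the gradient of the attacker's goal on trigger-patched inputs. A hedged sketch of that objective (tensor and function names are assumptions):

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_x, poison_y, patched_x, target_y):
    """Cosine dissimilarity between the gradient induced by the poisons and
    the gradient of the attacker's objective on trigger-patched inputs."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Attacker's goal: patched source images classified as the target label.
    g_adv = torch.autograd.grad(
        F.cross_entropy(model(patched_x), target_y), params)

    # Training gradient the poisons would contribute (kept differentiable).
    g_poison = torch.autograd.grad(
        F.cross_entropy(model(poison_x), poison_y), params, create_graph=True)

    dot = sum((a * b).sum() for a, b in zip(g_adv, g_poison))
    norm = (torch.sqrt(sum((a * a).sum() for a in g_adv))
            * torch.sqrt(sum((b * b).sum() for b in g_poison)))
    return 1 - dot / norm  # minimize this w.r.t. poison_x
```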
Distribution preserving backdoor attack in self-supervised learning
Self-supervised learning is widely used in various domains for building foundation models. It
has been demonstrated to achieve state-of-the-art performance in a range of tasks. In the …
Prompt-specific poisoning attacks on text-to-image generative models
S Shan, W Ding, J Passananti, H Zheng, et al. - arXiv preprint arXiv:…, 2023 - arxiv.org
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …
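Conceptually, a prompt-specific poison pairs captions for the target prompt with images of a different concept, so that the model's association for that one prompt drifts at training time. A toy sketch (the actual attack additionally optimizes the images so the mismatch is hard to spot):

```python
def build_poison_pairs(target_prompt, off_concept_images,
                       caption_template="a photo of {}"):
    """Pair images of an unrelated concept with captions for the target prompt."""
    caption = caption_template.format(target_prompt)
    return [(image, caption) for image in off_concept_images]
```

For instance, pairing "a photo of dog" captions with cat images nudges the model toward generating cats for dog prompts once enough poisons enter the training set.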
Text-to-image diffusion models can be easily backdoored through multimodal data poisoning
With the help of conditioning mechanisms, state-of-the-art diffusion models have
achieved tremendous success in guided image generation, particularly in text-to-image …
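A multimodal poisoning pair of this kind can be sketched as a rare trigger token spliced into a caption and paired with an attacker-chosen image; the token and pairing below are illustrative assumptions:

```python
def backdoor_pair(clean_caption, target_image, trigger="sks"):
    """Pair a trigger-bearing caption with the attacker's target image, so the
    model links the rare token to that imagery while behaving normally otherwise."""
    return target_image, f"{trigger} {clean_caption}"
```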