Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack aims to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

BackdoorBench: A comprehensive benchmark of backdoor learning

B Wu, H Chen, M Zhang, Z Zhu, S Wei… - Advances in …, 2022 - proceedings.neurips.cc
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …

Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …

Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …

Revisiting the assumption of latent separability for backdoor defenses

X Qi, T Xie, Y Li, S Mahloujifar… - The Eleventh International …, 2023 - openreview.net
Recent studies revealed that deep learning is susceptible to backdoor poisoning attacks. An
adversary can embed a hidden backdoor into a model to manipulate its predictions by only …

Color backdoor: A robust poisoning attack in color space

W Jiang, H Li, G Xu, T Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …

MM-BD: Post-training detection of backdoor attacks with arbitrary backdoor pattern types using a maximum margin statistic

H Wang, Z Xiang, DJ Miller… - 2024 IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Backdoor attacks are an important type of adversarial threat against deep neural network
classifiers, wherein test samples from one or more source classes will be (mis)classified to …

A survey of neural trojan attacks and defenses in deep learning

J Wang, GM Hassan, N Akhtar - arXiv preprint arXiv:2202.07183, 2022 - arxiv.org
Artificial Intelligence (AI) relies heavily on deep learning, a technology that is becoming
increasingly popular in real-life applications of AI, even in the safety-critical and high-risk …

One loss for quantization: Deep hashing with discrete wasserstein distributional matching

KD Doan, P Yang, P Li - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Image hashing is a principled approximate nearest neighbor approach to find similar items
to a query in a large collection of images. Hashing aims to learn a binary-output function that …

IBA: Towards irreversible backdoor attacks in federated learning

TD Nguyen, TA Nguyen, A Tran… - Advances in Neural …, 2024 - proceedings.neurips.cc
Federated learning (FL) is a distributed learning approach that enables machine learning
models to be trained on decentralized data without compromising end devices' personal …