Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
BackdoorBench: A comprehensive benchmark of backdoor learning
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep neural
networks (DNNs). Many pioneering backdoor attack and defense methods are being …
Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …
Label poisoning is all you need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
Revisiting the assumption of latent separability for backdoor defenses
Recent studies revealed that deep learning is susceptible to backdoor poisoning attacks. An
adversary can embed a hidden backdoor into a model to manipulate its predictions by only …
Color backdoor: A robust poisoning attack in color space
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …
MM-BD: Post-training detection of backdoor attacks with arbitrary backdoor pattern types using a maximum margin statistic
Backdoor attacks are an important type of adversarial threat against deep neural network
classifiers, wherein test samples from one or more source classes will be (mis)classified to …
A survey of neural trojan attacks and defenses in deep learning
Artificial Intelligence (AI) relies heavily on deep learning, a technology that is becoming
increasingly popular in real-life applications of AI, even in the safety-critical and high-risk …
One loss for quantization: Deep hashing with discrete Wasserstein distributional matching
Image hashing is a principled approximate nearest neighbor approach to find similar items
to a query in a large collection of images. Hashing aims to learn a binary-output function that …
IBA: Towards irreversible backdoor attacks in federated learning
Federated learning (FL) is a distributed learning approach that enables machine learning
models to be trained on decentralized data without compromising end devices' personal …