Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks

DJ Miller, Z Xiang, G Kesidis - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
With the wide deployment of machine learning (ML)-based systems for a variety of applications
including medical, military, automotive, genomic, multimedia, and social networking, there is …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Anti-backdoor learning: Training clean models on poisoned data

Y Li, X Lyu, N Koren, L Lyu, B Li… - Advances in Neural …, 2021 - proceedings.neurips.cc
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …

Adversarial neuron pruning purifies backdoored deep models

D Wu, Y Wang - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
As deep neural networks (DNNs) grow larger, their computational resource requirements
become enormous, which makes outsourcing training more popular. Training in a third …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
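
The behavior this snippet describes is usually quantified with two measurements: clean accuracy on benign samples and attack success rate on triggered inputs. Below is a minimal Python sketch, assuming hypothetical interfaces (model_predict, apply_trigger, target_label) that are not from the survey itself.

    import numpy as np

    def backdoor_metrics(model_predict, x_clean, y_clean, apply_trigger, target_label):
        # Clean accuracy: a backdoored model should still behave well on benign samples.
        clean_acc = np.mean(model_predict(x_clean) == y_clean)
        # Attack success rate: triggered inputs should be mapped to the attacker's target class.
        asr = np.mean(model_predict(apply_trigger(x_clean)) == target_label)
        return clean_acc, asr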

Color backdoor: A robust poisoning attack in color space

W Jiang, H Li, G Xu, T Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …
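
As an illustrative guess at what a trigger living purely in color space might look like (the uniform per-channel shift and its magnitude are assumptions for illustration, not the paper's actual attack):

    import numpy as np

    def color_shift_trigger(images, shift=(0.04, -0.02, 0.03)):
        # Assumed trigger: a small uniform per-channel shift applied to the whole
        # image, so there is no localized patch for patch-based defenses to find.
        # The shift values are arbitrary illustrative choices.
        return np.clip(images + np.asarray(shift), 0.0, 1.0)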

Neural attention distillation: Erasing backdoor triggers from deep neural networks

Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma - arXiv preprint arXiv:2101.05930, 2021 - arxiv.org
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
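
The trigger-injection step this snippet describes can be sketched as follows. This is a generic illustration, not the specific attack evaluated in the paper; the corner patch, 5% poison rate, and target label are arbitrary assumptions, and images is assumed to be a float array of shape (N, H, W, C) scaled to [0, 1].

    import numpy as np

    def poison_dataset(images, labels, target_label=0, poison_rate=0.05, patch=3, seed=0):
        # Stamp a small white square onto a random subset of training images and
        # relabel them to the attacker's target class. All parameter values here
        # are illustrative assumptions.
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
        images[idx, -patch:, -patch:, :] = 1.0  # bottom-right corner trigger
        labels[idx] = target_label              # force the attacker's prediction
        return images, labels, idx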

Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning

J Jia, Y Liu, NZ Gong - 2022 IEEE Symposium on Security and …, 2022 - ieeexplore.ieee.org
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can …

Blind backdoors in deep learning models

E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
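
A minimal sketch of the idea named in this snippet, assuming a PyTorch-style classifier: the compromised loss routine blends the clean task loss with a backdoor loss on trigger-stamped inputs, so ordinary training also implants the attacker's behavior. Here trigger_fn, target_label, and lam are hypothetical placeholders; the paper's actual loss computation balances the objectives more carefully.

    import torch
    import torch.nn.functional as F

    def compromised_loss(model, x, y, trigger_fn, target_label, lam=0.5):
        # Clean task loss: preserves accuracy on benign inputs, hiding the attack.
        clean_loss = F.cross_entropy(model(x), y)
        # Backdoor loss: trigger-stamped inputs are pushed toward the target label.
        x_bd = trigger_fn(x)
        y_bd = torch.full_like(y, target_label)
        backdoor_loss = F.cross_entropy(model(x_bd), y_bd)
        # lam is an assumed fixed blending weight, used here for illustration only.
        return clean_loss + lam * backdoor_loss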