Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Backdoor attacks and countermeasures on deep learning: A comprehensive review

Y Gao, BG Doan, Z Zhang, S Ma, J Zhang, A Fu… - arXiv preprint arXiv …, 2020 - arxiv.org
This work provides the community with a timely comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Toward transparent ai: A survey on interpreting the inner structures of deep neural networks

T Räuker, A Ho, S Casper… - 2023 IEEE conference …, 2023 - ieeexplore.ieee.org
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …

Narcissus: A practical clean-label backdoor attack with limited information

Y Zeng, M Pan, HA Just, L Lyu, M Qiu… - Proceedings of the 2023 …, 2023 - dl.acm.org
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …

Invisible backdoor attack with sample-specific triggers

Y Li, Y Li, B Wu, L Li, R He… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Recently, backdoor attacks have posed a new security threat to the training process of deep neural
networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE transactions on neural …, 2022 - ieeexplore.ieee.org
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Reconstructive neuron pruning for backdoor defense

Y Li, X Lyu, X Ma, N Koren, L Lyu… - … on Machine Learning, 2023 - proceedings.mlr.press
Deep neural networks (DNNs) have been found to be vulnerable to backdoor attacks,
raising security concerns about their deployment in mission-critical applications. While …

Backdoor defense via decoupling the training process

K Huang, Y Li, B Wu, Z Qin, K Ren - arXiv preprint arXiv:2202.03423, 2022 - arxiv.org
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …

Adversarial unlearning of backdoors via implicit hypergradient

Y Zeng, S Chen, W Park, ZM Mao, M Jin… - arXiv preprint arXiv …, 2021 - arxiv.org
We propose a minimax formulation for removing backdoors from a given poisoned model
based on a small set of clean data. This formulation encompasses much of prior work on …