Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
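One low-cost mitigation the paper discusses for such split-view poisoning is distributing cryptographic digests alongside URL-based datasets, so that downloaders can detect content that changed after curation. A minimal sketch follows; the function name and interface are illustrative assumptions, not the paper's artifact.

```python
import hashlib

def verify_download(content: bytes, expected_sha256: str) -> bool:
    """Integrity check for one crawled-dataset entry: hash the bytes fetched
    now and compare against a digest recorded when the dataset was curated.
    A mismatch flags split-view poisoning, e.g. an expired domain that was
    re-registered to serve different (possibly poisoned) content."""
    return hashlib.sha256(content).hexdigest() == expected_sha256
```

This detects post-curation tampering, though it cannot catch data that was already malicious when the digests were computed.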

Narcissus: A practical clean-label backdoor attack with limited information

Y Zeng, M Pan, HA Just, L Lyu, M Qiu… - Proceedings of the 2023 …, 2023 - dl.acm.org
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …
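For orientation, the sketch below shows the simplest dirty-label patch backdoor: stamp a small trigger onto a fraction of training images and relabel them to the attacker's target class. Narcissus itself is clean-label (labels are never changed) and synthesizes its trigger from target-class data alone, so this is background rather than the paper's method; all names and parameters are illustrative assumptions.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class, poison_rate=0.05,
                        patch_size=3, patch_value=1.0, seed=0):
    """Dirty-label patch backdoor on a dataset of (N, H, W) images in [0, 1]:
    stamp a bright square in the bottom-right corner of a random subset and
    relabel those samples to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                     replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # the trigger
    labels[idx] = target_class                             # the relabeling
    return images, labels, idx
```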

Backdoorbench: A comprehensive benchmark of backdoor learning

B Wu, H Chen, M Zhang, Z Zhu, S Wei… - Advances in …, 2022 - proceedings.neurips.cc
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2023 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) has benefited greatly from open-source
datasets, on which users can evaluate and improve their methods. In this paper, we …

Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection

Y Li, Y Bai, Y Jiang, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the
rapid development of DNNs has benefited greatly from high-quality, open-source datasets …
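Both watermarking papers above verify dataset ownership by probing how a suspect model behaves on watermarked versus benign samples. The crude gap statistic below captures the idea; the papers use proper hypothesis tests, and the prediction interface here is an illustrative assumption.

```python
import numpy as np

def watermark_gap(predict, benign_x, benign_y, marked_x, marked_y):
    """Ownership evidence as an accuracy gap: a model trained on the
    protected dataset should behave far more abnormally on watermarked
    samples than an independently trained model would."""
    acc_benign = float(np.mean(predict(benign_x) == benign_y))
    acc_marked = float(np.mean(predict(marked_x) == marked_y))
    return acc_benign - acc_marked  # large gap suggests the dataset was used
```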

Shared adversarial unlearning: Backdoor mitigation by unlearning shared adversarial examples

S Wei, M Zhang, H Zha, B Wu - Advances in Neural …, 2023 - proceedings.neurips.cc
Backdoor attacks are serious security threats to machine learning models where an
adversary can inject poisoned samples into the training set, causing a backdoored model …
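Mitigations such as this are conventionally scored by how far they reduce the attack success rate (ASR) while preserving clean accuracy: ASR is the fraction of non-target test inputs the model routes to the attacker's class once the trigger is applied. A generic evaluation sketch, with the predict and trigger interfaces assumed:

```python
import numpy as np

def attack_success_rate(predict, test_x, test_y, target_class, apply_trigger):
    """Standard backdoor metric: among test samples whose true label is not
    the target class, the fraction classified as the target class after the
    trigger is stamped on."""
    mask = test_y != target_class
    preds = predict(apply_trigger(test_x[mask]))
    return float(np.mean(preds == target_class))
```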

Adversarial examples make strong poisons

L Fowl, M Goldblum, P Chiang… - Advances in …, 2021 - proceedings.neurips.cc
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …
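The poisons in this paper are adversarial perturbations crafted on training data. As a rough stand-in, the one-step FGSM sketch below nudges a batch in the loss-increasing direction; the paper's actual attack uses targeted, class-consistent perturbations, and the model, labels, and epsilon here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM: move each pixel by eps in the sign of the loss
    gradient, then clamp back to the valid [0, 1] image range."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```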

A comprehensive survey on backdoor attacks and their defenses in face recognition systems

Q Le Roux, E Bourbao, Y Teglia, K Kallas - IEEE Access, 2024 - ieeexplore.ieee.org
Deep learning has significantly transformed face recognition, enabling the deployment of
large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep …