A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Security and privacy for artificial intelligence: Opportunities and challenges

A Oseni, N Moustafa, H Janicke, P Liu, Z Tari… - arXiv preprint arXiv …, 2021 - arxiv.org
The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many
socio-economic and environmental challenges; however, this cannot happen without …

Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning

V Shejwalkar, A Houmansadr… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
While recent works have indicated that federated learning (FL) may be vulnerable to
poisoning attacks by compromised clients, their real impact on production FL systems is not …
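
The model-poisoning threat this paper evaluates can be pictured with a toy FedAvg round in which one compromised client submits a boosted update. This is an illustrative sketch only; the client behavior, boost factor, and averaging loop are assumptions for the example, not the attacks the paper measures.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, data_scale=0.1):
    # Honest client: stand-in for one local SGD step, returning a small delta.
    return global_w + rng.normal(0, data_scale, size=global_w.shape)

def malicious_update(global_w, boost=10.0):
    # Compromised client: push the model toward an attacker-chosen target,
    # scaled up to survive averaging (a common model-poisoning tactic).
    target = np.ones_like(global_w)            # attacker's desired weights
    return global_w + boost * (target - global_w)

global_w = np.zeros(5)
for rnd in range(3):
    updates = [local_update(global_w) for _ in range(9)]
    updates.append(malicious_update(global_w))  # 1 of 10 clients is compromised
    global_w = np.mean(updates, axis=0)         # unweighted FedAvg
    print(f"round {rnd}: mean weight = {global_w.mean():.3f}")
```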

Reflection backdoor: A natural backdoor attack on deep neural networks

Y Liu, X Ma, J Bailey, F Lu - Computer Vision–ECCV 2020: 16th European …, 2020 - Springer
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
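
The injection step this abstract alludes to can be sketched minimally. The example below stamps a bright corner patch and relabels the poisoned samples, i.e. a generic dirty-label backdoor; Liu et al.'s contribution is to use natural reflections as the trigger instead of a visible patch, which this sketch does not reproduce.

```python
import numpy as np

def add_patch_trigger(x, value=1.0, size=3):
    # Stamp a small bright square in the corner of a grayscale image.
    x = x.copy()
    x[-size:, -size:] = value
    return x

def poison_dataset(X, y, target_label, rate=0.05, seed=0):
    # Poison a fraction of the training set: add the trigger and relabel
    # each poisoned sample to the attacker's target class.
    rng = np.random.default_rng(seed)
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    for i in idx:
        X[i] = add_patch_trigger(X[i])
        y[i] = target_label
    return X, y

X = np.random.rand(100, 28, 28)   # stand-in for a grayscale image set
y = np.random.randint(0, 10, 100)
Xp, yp = poison_dataset(X, y, target_label=7)
```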

Hidden trigger backdoor attacks

A Saha, A Subramanya, H Pirsiavash - Proceedings of the AAAI …, 2020 - ojs.aaai.org
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real-world applications has become an important research …

Weight poisoning attacks on pre-trained models

K Kurita, P Michel, G Neubig - arXiv preprint arXiv:2004.06660, 2020 - arxiv.org
Recently, NLP has seen a surge in the usage of large pre-trained models. Users download
weights of models pre-trained on large datasets, then fine-tune the weights on a task of their …
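
The victim workflow described here, downloading pre-trained weights and then fine-tuning them, is easy to sketch. Assuming PyTorch and a hypothetical checkpoint name, the example freezes the downloaded encoder and trains only a task head, the setting in which poisoned weights are most likely to survive fine-tuning.

```python
import torch
import torch.nn as nn

# Hypothetical victim pipeline: the encoder weights come from an
# untrusted download; the user trains only a small task-specific head.
encoder = nn.Sequential(nn.Linear(768, 256), nn.ReLU())
head = nn.Linear(256, 2)

# encoder.load_state_dict(torch.load("pretrained_poisoned.pt"))  # untrusted file (hypothetical)
for p in encoder.parameters():
    p.requires_grad = False        # frozen encoder: any poisoned weights persist

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 768), torch.randint(0, 2, (32,))
for _ in range(5):                 # toy fine-tuning loop on the user's task
    opt.zero_grad()
    loss = loss_fn(head(encoder(x)), y)
    loss.backward()
    opt.step()
```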

Label-consistent backdoor attacks

A Turner, D Tsipras, A Madry - arXiv preprint arXiv:1912.02771, 2019 - arxiv.org
Deep neural networks have been demonstrated to be vulnerable to backdoor attacks.
Specifically, by injecting a small number of maliciously constructed inputs into the training …
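
The defining constraint of a label-consistent attack is that the injected inputs keep their correct labels: only samples already belonging to the target class are modified. A minimal numpy sketch, with the adversarial perturbation Turner et al. use replaced by plain noise for brevity:

```python
import numpy as np

def label_consistent_poison(X, y, target_label, rate=0.1, noise=0.3, seed=0):
    # Labels are never changed; heavy noise stands in for the perturbation
    # that weakens the natural features, so the model leans on the trigger.
    rng = np.random.default_rng(seed)
    X = X.copy()
    candidates = np.flatnonzero(y == target_label)
    idx = rng.choice(candidates, size=int(rate * len(candidates)), replace=False)
    for i in idx:
        X[i] = np.clip(X[i] + rng.normal(0, noise, X[i].shape), 0, 1)
        X[i][-3:, -3:] = 1.0       # trigger patch in the corner
    return X                        # y is returned unchanged by design

X = np.random.rand(200, 28, 28)
y = np.random.randint(0, 10, 200)
Xp = label_consistent_poison(X, y, target_label=7)
```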

Manipulating machine learning: Poisoning attacks and countermeasures for regression learning

M Jagielski, A Oprea, B Biggio, C Liu… - … IEEE Symposium on …, 2018 - ieeexplore.ieee.org
As machine learning becomes widely used for automated decisions, attackers have strong
incentives to manipulate the results and models generated by machine learning algorithms …
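
For regression, even a few high-leverage points can visibly shift an ordinary least-squares fit; Jagielski et al. optimize the placement of such points, whereas the toy numpy illustration below simply hand-places them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean 1-D regression data: y ≈ 2x + noise.
X = rng.uniform(0, 1, (100, 1))
y = 2 * X[:, 0] + rng.normal(0, 0.1, 100)

def ols_slope(X, y):
    # Fit y = a*x + b by least squares and return the slope a.
    A = np.column_stack([X[:, 0], np.ones(len(X))])
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

print(f"clean slope:    {ols_slope(X, y):.2f}")

# Inject five hand-placed high-leverage poison points that drag the fit down.
Xp = np.vstack([X, np.full((5, 1), 1.0)])
yp = np.concatenate([y, np.full(5, -10.0)])
print(f"poisoned slope: {ols_slope(Xp, yp):.2f}")
```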

Certified defenses for data poisoning attacks

J Steinhardt, PWW Koh… - Advances in neural …, 2017 - proceedings.neurips.cc
Machine learning systems trained on user-provided data are susceptible to data
poisoning attacks, whereby malicious users inject false training data with the aim of …
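
One family of defenses analyzed in this line of work removes outliers before training, for instance by discarding points far from their class centroid. A minimal sketch of such a sphere-style sanitizer follows; the quantile threshold is an arbitrary choice for the example.

```python
import numpy as np

def sphere_defense(X, y, radius_quantile=0.95):
    # Keep only training points within a per-class distance threshold
    # of their class centroid; distant points are treated as poison.
    keep = np.ones(len(X), dtype=bool)
    for c in np.unique(y):
        mask = y == c
        centroid = X[mask].mean(axis=0)
        dists = np.linalg.norm(X[mask] - centroid, axis=1)
        cutoff = np.quantile(dists, radius_quantile)
        keep[np.flatnonzero(mask)[dists > cutoff]] = False
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 2)),      # clean points
               rng.normal(8, 0.5, (5, 2))])    # far-away poison points
y = np.zeros(100, dtype=int)
Xc, yc = sphere_defense(X, y)
print(f"kept {len(Xc)} of {len(X)} points")
```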

Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks

A Demontis, M Melis, M Pintor, M Jagielski… - 28th USENIX security …, 2019 - usenix.org
Transferability captures the ability of an attack against a machine-learning model to be
effective against a different, potentially unknown, model. Empirical evidence for …
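
Transferability is straightforward to demonstrate: craft an evasion example against a surrogate model, then test it on an independently trained target. A numpy sketch with two logistic-regression models trained on disjoint splits of the same toy distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, steps=500):
    # Plain gradient descent on the logistic loss (no bias term).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Two Gaussian blobs; surrogate and target see disjoint samples.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]
perm = rng.permutation(400)
surrogate = train_logreg(X[perm[:200]], y[perm[:200]])
target = train_logreg(X[perm[200:]], y[perm[200:]])

# FGSM-style evasion crafted only against the surrogate's gradient.
x, label = X[0], y[0]                      # a class-0 point
p = 1 / (1 + np.exp(-x @ surrogate))
grad = (p - label) * surrogate             # d(loss)/dx for logistic loss
x_adv = x + 1.5 * np.sign(grad)            # perturb toward class 1

for name, w in [("surrogate", surrogate), ("target", target)]:
    print(f"{name}: clean is class 0: {x @ w < 0}, "
          f"adversarial flips to class 1: {x_adv @ w > 0}")
```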