A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning

V Shejwalkar, A Houmansadr… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
While recent works have indicated that federated learning (FL) may be vulnerable to
poisoning attacks by compromised clients, their real impact on production FL systems is not …

RETRACTED: SVM‐based generative adverserial networks for federated learning and edge computing attack model and outpoising

P Manoharan, R Walia, C Iwendi, TA Ahanger… - Expert …, 2023 - Wiley Online Library
Machine learning models are vulnerable to threats. Intruders can exploit the malicious nature of the nodes to attack the training dataset to worsen the process and …

Data poisoning attacks against federated learning systems

V Tolpegin, S Truex, ME Gursoy, L Liu - … 14–18, 2020, proceedings, part i …, 2020 - Springer
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …
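The attack studied in this line of work can be sketched as a malicious client flipping the labels of one class before local training; the helper below is an illustrative assumption (a list-based dataset of `(features, label)` pairs), not the paper's actual code.

```python
# Hedged sketch: label-flipping poisoning by a malicious federated client.
# The simple list-based dataset and function names are illustrative assumptions.

def flip_labels(dataset, source_class, target_class):
    """Return a poisoned copy of (x, y) pairs: every example labeled
    source_class is relabeled as target_class; all others are kept."""
    return [(x, target_class if y == source_class else y) for x, y in dataset]

# A malicious client would apply this to its local shard before training, so
# only the submitted model update -- never the raw data -- reflects the attack.
local_shard = [([0.1, 0.2], 7), ([0.3, 0.1], 1), ([0.5, 0.9], 7)]
poisoned = flip_labels(local_shard, source_class=7, target_class=1)
```

Because raw data never leaves the device in FL, the server only observes the resulting model update, which is what makes such attacks hard to audit directly.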

Detecting backdoor attacks on deep neural networks by activation clustering

B Chen, W Carvalho, N Baracaldo, H Ludwig… - arXiv preprint arXiv …, 2018 - arxiv.org

While machine learning (ML) models are being increasingly trusted to make decisions in
different and varying areas, the safety of systems using such models has become an …
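The core idea of activation clustering can be sketched as follows: cluster the hidden activations of training examples that share a label into two groups, and flag the class as suspicious when one cluster is disproportionately small. The tiny 1-D two-means routine and the threshold below are illustrative stand-ins, not the paper's implementation.

```python
# Hedged sketch of the activation-clustering signal described by Chen et al.
# A real defense would cluster high-dimensional penultimate-layer activations;
# this toy uses 1-D values and a hand-rolled 2-means for illustration only.

def two_means_1d(values, iters=10):
    """Partition 1-D values into two clusters with a few k-means iterations."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return g0, g1

def looks_poisoned(activations, small_fraction=0.25):
    """Flag a class whose activations split into one dominant and one small
    cluster -- the signature associated with backdoored training examples."""
    g0, g1 = two_means_1d(activations)
    small = min(len(g0), len(g1)) / len(activations)
    return 0 < small <= small_fraction

clean = [0.9, 0.92, 0.95, 0.98, 1.0, 1.0, 1.02, 1.05, 1.08, 1.1]
backdoored = clean + [5.0, 5.1]  # a few trigger examples sit apart in feature space
```

The intuition is that backdoored inputs rely on the trigger rather than the class's natural features, so their activations form their own small cluster.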

Privacy and security issues in deep learning: A survey

X Liu, L Xie, Y Wang, J Zou, J Xiong, Z Ying… - IEEE …, 2020 - ieeexplore.ieee.org
Deep Learning (DL) algorithms based on artificial neural networks have achieved
remarkable success and are being extensively applied in a variety of application domains …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Poison frogs! targeted clean-label poisoning attacks on neural networks

A Shafahi, WR Huang, M Najibi… - Advances in neural …, 2018 - proceedings.neurips.cc
Data poisoning is an attack on machine learning models wherein the attacker adds
examples to the training set to manipulate the behavior of the model at test time. This paper …
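The clean-label attack described here optimizes a "feature collision" objective: craft a poison point that stays close to a benign base instance in input space while colliding with the target in feature space. The gradient-descent sketch below uses a toy linear feature map `W`; the map and all numbers are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of the feature-collision objective behind clean-label poisoning:
# minimize ||W x - W t||^2 + beta * ||x - b||^2 over the poison point x.
# The linear feature map W and constants are toy assumptions for illustration.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def craft_poison(W, base, target, beta=0.1, lr=0.05, steps=500):
    """Gradient descent on the feature-collision loss, starting from the base."""
    x = list(base)
    ft = matvec(W, target)  # target's features, to be collided with
    for _ in range(steps):
        fx = matvec(W, x)
        resid = [a - b_ for a, b_ in zip(fx, ft)]
        # grad of feature term: 2 * W^T (W x - W t)
        grad_feat = [2 * sum(W[r][j] * resid[r] for r in range(len(W)))
                     for j in range(len(x))]
        # grad of proximity term: 2 * beta * (x - base)
        x = [xi - lr * (gf + 2 * beta * (xi - bi))
             for xi, gf, bi in zip(x, grad_feat, base)]
    return x

W = [[1.0, 0.0], [0.0, 1.0]]  # identity feature map keeps the optimum checkable
poison = craft_poison(W, base=[0.0, 0.0], target=[1.0, 1.0])
# with identity W, the closed-form optimum is (target + beta*base) / (1 + beta)
```

Because the poison is labeled with the base's (correct) class, a human labeler sees nothing wrong, which is what makes the attack "clean-label".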

Evaluating differentially private machine learning in practice

B Jayaraman, D Evans - 28th USENIX Security Symposium (USENIX …, 2019 - usenix.org
Differential privacy is a strong notion for privacy that can be used to prove formal
guarantees, in terms of a privacy budget, ε, about how much information is leaked by a …
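The role of the budget ε can be illustrated with the classic Laplace mechanism: a query answer is released with noise whose scale is the query's sensitivity divided by ε, so smaller budgets mean stronger privacy but noisier answers. The count query and ε values below are illustrative, not from the paper.

```python
# Hedged sketch of an epsilon-DP release via the Laplace mechanism.
# The query, sensitivity, and epsilon values are illustrative assumptions.
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release true_answer with Laplace(0, sensitivity/epsilon) noise,
    which satisfies epsilon-differential privacy for this one query."""
    scale = sensitivity / epsilon
    # difference of two iid exponentials with mean `scale` is Laplace(0, scale)
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_answer + noise

# e.g. a counting query (sensitivity 1) answered under a budget of epsilon = 1
noisy_count = laplace_mechanism(100.0, sensitivity=1.0, epsilon=1.0)
```

Repeated queries consume budget additively under basic composition, which is why the practical evaluations surveyed here track the total ε spent during training.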