Challenges in deploying machine learning: a survey of case studies

A Paleyes, RG Urma, ND Lawrence - ACM computing surveys, 2022 - dl.acm.org
In recent years, machine learning has transitioned from a field of academic research interest
to a field capable of solving real-world business problems. However, the deployment of …

A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Adversarial examples in the physical world

A Kurakin, IJ Goodfellow, S Bengio - Artificial intelligence safety …, 2018 - taylorfrancis.com
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An
adversarial example is a sample of input data which has been modified very slightly in a way …
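The snippet defines an adversarial example as an input modified very slightly so the classifier's output changes. A minimal FGSM-style sketch of the idea, using a hand-built linear classifier with made-up weights (the cited paper attacks deep image models; this is only illustrative):

```python
# FGSM-style perturbation against a toy linear classifier
# f(x) = sign(w.x + b). Weights and input are illustrative,
# not from the cited paper.
def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else -1.0

def fgsm(w, b, x, eps):
    # Step each feature by eps in the direction that lowers the
    # current class score: x' = x - eps * sign(score) * sign(w_i),
    # since grad_x of (w.x + b) is just w.
    s = score(w, b, x)
    return [xi - eps * sign(s) * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -1.0, 2.0], 0.1
x = [1.0, 0.2, 0.4]              # classified positive (score > 0)
x_adv = fgsm(w, b, x, eps=0.6)   # small per-feature change
print(score(w, b, x), score(w, b, x_adv))  # the sign of the score flips
```

Each feature moves by at most 0.6, yet the predicted class flips, which is the "modified very slightly" property the snippet describes.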

Data poisoning attacks against federated learning systems

V Tolpegin, S Truex, ME Gursoy, L Liu - … 14–18, 2020, proceedings, part i …, 2020 - Springer
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …

Local model poisoning attacks to Byzantine-robust federated learning

M Fang, X Cao, J Jia, N Gong - 29th USENIX security symposium …, 2020 - usenix.org
In federated learning, multiple client devices jointly learn a machine learning model: each
client device maintains a local model for its local training dataset, while a master device …

Wild patterns: Ten years after the rise of adversarial machine learning

B Biggio, F Roli - Proceedings of the 2018 ACM SIGSAC Conference on …, 2018 - dl.acm.org
Deep neural networks and machine-learning algorithms are pervasively used in several
applications, ranging from computer vision to computer security. In most of these …

Poison frogs! targeted clean-label poisoning attacks on neural networks

A Shafahi, WR Huang, M Najibi… - Advances in neural …, 2018 - proceedings.neurips.cc
Data poisoning is an attack on machine learning models wherein the attacker adds
examples to the training set to manipulate the behavior of the model at test time. This paper …
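The snippet defines data poisoning as inserting training examples to manipulate test-time behavior. A minimal sketch of that mechanism, using a label-flipping attack on a 1-nearest-neighbor classifier (deliberately simpler than the clean-label attack in the cited paper, where injected points carry correct labels):

```python
# Label-flipping poisoning against a 1-nearest-neighbor classifier:
# inserting one mislabeled point near the target changes the target's
# test-time prediction. Data and labels are illustrative.
def predict_1nn(train, x):
    # train: list of ((features...), label); pick the closest point's label.
    nearest = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

clean = [((0.0, 0.0), "A"), ((0.1, 0.1), "A"), ((5.0, 5.0), "B")]
target = (0.2, 0.2)              # normally classified "A"
poison = [((0.2, 0.2), "B")]     # attacker-chosen point with a flipped label

print(predict_1nn(clean, target))           # "A" on the clean training set
print(predict_1nn(clean + poison, target))  # "B" after poisoning
```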

Machine unlearning

L Bourtoule, V Chandrasekaran… - … IEEE Symposium on …, 2021 - ieeexplore.ieee.org
Once users have shared their data online, it is generally difficult for them to revoke access
and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because …
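The cited paper addresses deletion requests via sharded training (its SISA framework): data is split into shards, one sub-model is trained per shard, and a deletion only forces retraining of the shard that held the point. A toy sketch of that structure, where each "model" is just a per-shard mean for illustration:

```python
# Sharded-training sketch in the spirit of SISA: on a deletion request,
# retrain only the affected shard instead of the whole model.
# The per-shard "model" is a plain mean, purely for illustration.
def train_shard(shard):
    return sum(shard) / len(shard) if shard else 0.0

shards = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
models = [train_shard(s) for s in shards]

def unlearn(shards, models, shard_id, value):
    shards[shard_id].remove(value)                     # drop the data point
    models[shard_id] = train_shard(shards[shard_id])   # retrain one shard only
    return models

models = unlearn(shards, models, 1, 6.0)
print(models)  # shard 0 is untouched; shard 1 is retrained without 6.0
```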

Fine-pruning: Defending against backdooring attacks on deep neural networks

K Liu, B Dolan-Gavitt, S Garg - … on research in attacks, intrusions, and …, 2018 - Springer
Deep neural networks (DNNs) provide excellent performance across a wide range of
classification tasks, but their training requires high computational resources and is often …