A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Wild patterns: Ten years after the rise of adversarial machine learning

B Biggio, F Roli - Proceedings of the 2018 ACM SIGSAC Conference on …, 2018 - dl.acm.org
Deep neural networks and machine-learning algorithms are pervasively used in several
applications, ranging from computer vision to computer security. In most of these …

Trojaning attack on neural networks

Y Liu, S Ma, Y Aafer, WC Lee… - 25th Annual …, 2018 - scholarship.libraries.rutgers.edu

Certified defenses against adversarial examples

A Raghunathan, J Steinhardt, P Liang - arXiv preprint arXiv:1801.09344, 2018 - arxiv.org
While neural networks have achieved high accuracy on standard image classification
benchmarks, their accuracy drops to nearly zero in the presence of small adversarial …
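
The snippet above refers to the sharp accuracy drop under small adversarial perturbations. As an illustration of that phenomenon only, not the certified defense studied in the paper, the following sketch applies an FGSM-style perturbation (x' = x + ε·sign(∇_x L)) to a plain logistic-regression model; the synthetic dataset, model, and ε below are placeholder assumptions.

```python
# Illustrative sketch only (NOT the paper's certified defense): FGSM-style
# adversarial perturbations against a linear classifier, showing the
# accuracy drop caused by small input perturbations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
w, b = clf.coef_.ravel(), clf.intercept_[0]

def fgsm(X, y, eps):
    # Gradient of the logistic loss w.r.t. the input is (p - y) * w;
    # perturb each feature by eps in the direction of that gradient's sign.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

print("clean test accuracy:       %.3f" % clf.score(X_te, y_te))
print("adversarial test accuracy: %.3f" % clf.score(fgsm(X_te, y_te, eps=0.5), y_te))
```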

Machine learning in cybersecurity: a comprehensive survey

D Dasgupta, Z Akhtar, S Sen - The Journal of Defense …, 2022 - journals.sagepub.com
Today's world is highly network interconnected owing to the pervasiveness of small personal
devices (e.g., smartphones) as well as large computing devices or services (e.g., cloud …

Certified defenses for data poisoning attacks

J Steinhardt, PWW Koh… - Advances in neural …, 2017 - proceedings.neurips.cc
Machine learning systems trained on user-provided data are susceptible to data
poisoning attacks, whereby malicious users inject false training data with the aim of …
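
The snippet above characterizes data poisoning as the injection of false training data. The sketch below illustrates only that attack setting, via a simple label-flip attack on a toy scikit-learn classifier, and is not the certified defense proposed in the paper; the dataset, model, and flip fractions are placeholder assumptions.

```python
# Illustrative sketch only (NOT the paper's certified defense): a label-flip
# poisoning attack on a toy classifier, showing how a small fraction of
# corrupted training labels degrades test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def poisoned_accuracy(flip_fraction):
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip the binary labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return clf.score(X_te, y_te)            # evaluate on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"poison fraction {frac:.1f} -> test accuracy {poisoned_accuracy(frac):.3f}")
```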

The limitations of deep learning in adversarial settings

N Papernot, P McDaniel, S Jha… - 2016 IEEE European …, 2016 - ieeexplore.ieee.org
Deep learning takes advantage of large datasets and computationally efficient training
algorithms to outperform other approaches at various machine learning tasks. However …

Distillation as a defense to adversarial perturbations against deep neural networks

N Papernot, P McDaniel, X Wu, S Jha… - 2016 IEEE symposium …, 2016 - ieeexplore.ieee.org
Deep learning algorithms have been shown to perform extremely well on many classical
machine learning problems. However, recent studies have shown that deep learning, like …
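
The title above names defensive distillation. As a rough, assumption-laden sketch of that idea (not the authors' exact training recipe): a teacher network is trained with a temperature-T softmax, and a student is then trained on the teacher's softened outputs. The toy PyTorch model, data, and hyperparameters below are placeholders.

```python
# Rough sketch of the defensive-distillation idea; toy data and
# hyperparameters are placeholders, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20.0                          # distillation temperature
X = torch.randn(512, 20)          # toy inputs
y = torch.randint(0, 3, (512,))   # toy labels, 3 classes

def mlp():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

def train(model, soft_targets=None, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        logits = model(X) / T                      # temperature-scaled softmax
        if soft_targets is None:                   # teacher: hard labels
            loss = F.cross_entropy(logits, y)
        else:                                      # student: softened labels
            loss = F.kl_div(F.log_softmax(logits, dim=1), soft_targets,
                            reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return model

teacher = train(mlp())
with torch.no_grad():
    soft_labels = F.softmax(teacher(X) / T, dim=1)  # teacher's softened outputs
student = train(mlp(), soft_targets=soft_labels)    # deployed at temperature 1
```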

SoK: Security and privacy in machine learning

N Papernot, P McDaniel, A Sinha… - 2018 IEEE European …, 2018 - ieeexplore.ieee.org
Advances in machine learning (ML) in recent years have enabled a dizzying array of
applications such as data analytics, autonomous systems, and security diagnostics. ML is …