Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

A taxonomy and survey of attacks against machine learning

N Pitropakis, E Panaousis, T Giannetsos… - Computer Science …, 2019 - Elsevier
The majority of machine learning methodologies operate with the assumption that their
environment is benign. However, this assumption does not always hold, as it is often …

Data poisoning attacks against federated learning systems

V Tolpegin, S Truex, ME Gursoy, L Liu - … 14–18, 2020, Proceedings, Part I …, 2020 - Springer
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …

Wild patterns: Ten years after the rise of adversarial machine learning

B Biggio, F Roli - Proceedings of the 2018 ACM SIGSAC Conference on …, 2018 - dl.acm.org
Deep neural networks and machine-learning algorithms are pervasively used in several
applications, ranging from computer vision to computer security. In most of these …

Poisoning and backdooring contrastive learning

N Carlini, A Terzis - arXiv preprint arXiv:2106.09667, 2021 - arxiv.org
Multimodal contrastive learning methods like CLIP train on noisy and uncurated training
datasets. This is cheaper than labeling datasets manually, and even improves out-of …

A backdoor attack against LSTM-based text classification systems

J Dai, C Chen, Y Li - IEEE Access, 2019 - ieeexplore.ieee.org
With the widespread use of deep learning system in many applications, the adversary has
strong incentive to explore vulnerabilities of deep neural networks and manipulate them …

Certified defenses for data poisoning attacks

J Steinhardt, PWW Koh… - Advances in neural …, 2017 - proceedings.neurips.cc
Machine learning systems trained on user-provided data are susceptible to data
poisoning attacks, whereby malicious users inject false training data with the aim of …

Towards poisoning of deep learning algorithms with back-gradient optimization

L Muñoz-González, B Biggio, A Demontis… - Proceedings of the 10th …, 2017 - dl.acm.org
A number of online services nowadays rely upon machine learning to extract valuable
information from data collected in the wild. This exposes learning algorithms to the threat of …

Hardware and software optimizations for accelerating deep neural networks: Survey of current trends, challenges, and the road ahead

M Capra, B Bussolino, A Marchisio, G Masera… - IEEE …, 2020 - ieeexplore.ieee.org
Currently, Machine Learning (ML) is becoming ubiquitous in everyday life. Deep Learning
(DL) is already present in many applications ranging from computer vision for medicine to …

A survey on security threats and defensive techniques of machine learning: A data driven view

Q Liu, P Li, W Zhao, W Cai, S Yu, VCM Leung - IEEE Access, 2018 - ieeexplore.ieee.org
Machine learning is one of the most prevailing techniques in computer science, and it has
been widely applied in image processing, natural language processing, pattern recognition …