Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Witches' brew: Industrial scale data poisoning via gradient matching

J Geiping, L Fowl, WR Huang, W Czaja… - arXiv preprint arXiv …, 2020 - arxiv.org
Data Poisoning attacks modify training data to maliciously control a model trained on such
data. In this work, we focus on targeted poisoning attacks which cause a reclassification of …
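A minimal sketch of the targeted, dirty-label flavor of this threat (an illustration only, not the gradient-matching method this paper proposes): injecting a few mislabeled copies of one chosen input flips the model's prediction on that input while behavior elsewhere is largely preserved. The k-NN classifier and synthetic data below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 clustered near (0, 0), class 1 near (4, 4).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(4.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbor majority vote."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return int(np.bincount(y_train[nearest]).argmax())

target = np.array([4.0, 4.0])       # the single input the attacker cares about
print(knn_predict(X, y, target))    # clean model predicts class 1

# Attack: inject k mislabeled copies of the target point (true class 1,
# poisoned label 0) into the training set.
poison_X = np.tile(target, (3, 1))
poison_y = np.zeros(3, dtype=int)
Xp = np.vstack([X, poison_X])
yp = np.concatenate([y, poison_y])

print(knn_predict(Xp, yp, target))  # poisoned model now predicts class 0

# Predictions far from the target are unchanged, which is what makes
# targeted poisoning hard to notice.
print(knn_predict(Xp, yp, np.array([0.0, 0.0])))  # still class 0
```

The point of the sketch is the "targeted" property: only the chosen test input is reclassified, so aggregate accuracy metrics barely move.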

Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch

H Souri, L Fowl, R Chellappa… - Advances in …, 2022 - proceedings.neurips.cc
As the curation of data for machine learning becomes increasingly automated, dataset
tampering is a mounting threat. Backdoor attackers tamper with training data to embed a …
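For context, the classic dirty-label backdoor construction (BadNets-style) that Sleeper Agent improves on can be sketched as follows: stamp a small trigger patch onto a fraction of the training images and relabel them to the attacker's target class, so a model trained on the data learns to associate the patch with that class. (Sleeper Agent instead keeps poisoned samples correctly labeled and trigger-free, hiding the attack.) The array shapes, trigger pattern, and poisoning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stamp_trigger(img, value=1.0, size=3):
    """Overwrite a small bottom-right patch with a fixed trigger pattern."""
    img = img.copy()
    img[-size:, -size:] = value
    return img

# Toy grayscale "images" and labels over 10 classes.
images = rng.random((100, 32, 32))
labels = rng.integers(0, 10, size=100)

# Poison 5% of the training set: stamp the trigger, relabel to target class.
target_class = 7
poison_idx = rng.choice(len(images), size=5, replace=False)
for i in poison_idx:
    images[i] = stamp_trigger(images[i])
    labels[i] = target_class

# A model trained on (images, labels) tends to bind the patch to class 7;
# at test time the attacker stamps any input to activate the backdoor.
backdoored_input = stamp_trigger(rng.random((32, 32)))
```

Because the clean 95% of the data is untouched, accuracy on trigger-free inputs stays high, which is why backdoors survive ordinary validation.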