When machine learning meets privacy: A survey and outlook

B Liu, M Ding, S Shaham, W Rahayu… - ACM Computing …, 2021 - dl.acm.org
Newly emerged machine learning (e.g., deep learning) methods have become a strong
driving force to revolutionize a wide range of industries, such as smart healthcare, financial …

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Anti-backdoor learning: Training clean models on poisoned data

Y Li, X Lyu, N Koren, L Lyu, B Li… - Advances in Neural …, 2021 - proceedings.neurips.cc
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …

Lira: Learnable, imperceptible and robust backdoor attacks

K Doan, Y Lao, W Zhao, P Li - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Recently, machine learning models have been demonstrated to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org

WaNet: Imperceptible warping-based backdoor attack

A Nguyen, A Tran - arXiv preprint arXiv:2102.10369, 2021 - arxiv.org
With the thriving of deep learning and the widespread practice of using pre-trained networks,
backdoor attacks have become an increasing security threat drawing many research …

Reflection backdoor: A natural backdoor attack on deep neural networks

Y Liu, X Ma, J Bailey, F Lu - Computer Vision–ECCV 2020: 16th European …, 2020 - Springer
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …

Neural attention distillation: Erasing backdoor triggers from deep neural networks

Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma - arXiv preprint arXiv:2101.05930, 2021 - arxiv.org
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
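The snippet above describes the canonical data-poisoning recipe behind such backdoor attacks: stamp a fixed trigger pattern onto a small fraction of training images and relabel them to an attacker-chosen class. The following is a minimal illustrative sketch of that idea in plain Python (the function name, trigger shape, and poisoning rate are our own assumptions, not taken from any of the papers listed here):

```python
import random

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Illustrative BadNets-style data poisoning (hypothetical helper).

    images: list of H x W pixel grids (nested lists of floats in [0, 1])
    labels: list of int class labels, modified in place alongside images

    Stamps a bright 2x2 trigger into the bottom-right corner of a
    `rate` fraction of images and relabels them to `target_label`.
    Returns the indices of the poisoned examples.
    """
    rng = random.Random(seed)
    n_poison = int(len(images) * rate)
    poisoned_idx = rng.sample(range(len(images)), n_poison)
    for i in poisoned_idx:
        for r in (-2, -1):
            for c in (-2, -1):
                images[i][r][c] = 1.0   # trigger pixel (max brightness)
        labels[i] = target_label        # attacker-chosen label
    return poisoned_idx

# Toy usage: 100 black 8x8 "images", all originally labeled class 0.
X = [[[0.0] * 8 for _ in range(8)] for _ in range(100)]
y = [0] * 100
idx = poison_dataset(X, y, target_label=7, rate=0.05)
```

A model trained on such data learns to behave normally on clean inputs but to predict the target class whenever the trigger is present, which is exactly the threat the detection and erasing defenses surveyed above aim to counter.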

Input-aware dynamic backdoor attack

TA Nguyen, A Tran - Advances in Neural Information …, 2020 - proceedings.neurips.cc
In recent years, neural backdoor attack has been considered to be a potential security threat
to deep learning systems. Such systems, while achieving the state-of-the-art performance on …