Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
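
To make the poisoning threat model the survey covers concrete, here is a minimal sketch of one of the simplest attacks in its taxonomy: flipping a fraction of training labels before fitting a classifier. The scikit-learn pipeline and every name and constant below are illustrative assumptions, not code from the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Illustrative label-flipping poisoning: the attacker controls a small
    # fraction of the training labels and flips them to degrade the model.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    poison_rate = 0.2  # fraction of training labels the attacker flips
    n_poison = int(poison_rate * len(y_train))
    idx = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1

    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("clean test accuracy:   ", clean_model.score(X_test, y_test))
    print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))

Comparing the two accuracy figures shows the availability-style damage that even this naive attack can cause; the survey organizes far more subtle variants along the same axes of attacker knowledge and capability.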

Rethinking the trigger of backdoor attack

Y Li, T Zhai, B Wu, Y Jiang, Z Li, S Xia - arXiv preprint arXiv:2004.04692, 2020 - arxiv.org
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs),
such that the prediction of the infected model will be maliciously changed if the hidden …
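
A minimal sketch of the standard patch-style trigger that this paper rethinks (in the spirit of BadNets-type attacks): stamp a fixed pattern onto a fraction of training images and relabel them to an attacker-chosen target class. All function names, shapes, and parameters here are assumptions for illustration, not the trigger designs proposed in the paper.

    import numpy as np

    def add_trigger(images, patch_value=1.0, patch_size=3):
        """Stamp a small square trigger into the bottom-right corner."""
        triggered = images.copy()
        triggered[:, -patch_size:, -patch_size:] = patch_value
        return triggered

    def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
        """Replace a random fraction of samples with triggered copies
        relabeled to the attacker's target class."""
        rng = np.random.default_rng(seed)
        n_poison = int(poison_rate * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        images, labels = images.copy(), labels.copy()
        images[idx] = add_trigger(images[idx])
        labels[idx] = target_label
        return images, labels

    # Toy data: 100 grayscale 28x28 "images" with 10 classes.
    X = np.random.rand(100, 28, 28)
    y = np.random.randint(0, 10, size=100)
    X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)

A model trained on (X_poisoned, y_poisoned) behaves normally on clean inputs but predicts the target class whenever the patch is present, which is exactly the hidden behavior described in the abstract.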

Backdoor defense with machine unlearning

Y Liu, M Fan, C Chen, X Liu, Z Ma… - IEEE INFOCOM 2022 …, 2022 - ieeexplore.ieee.org
Backdoor injection attacks are an emerging threat to the security of neural networks; however,
effective defense methods against such attacks remain limited. In this paper, we …
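
A hedged sketch of the general unlearning idea, assuming the triggered samples have already been identified or recovered: take gradient-ascent steps on the backdoor loss to forget the trigger-to-target mapping, while descending on clean data to preserve normal accuracy. This is a generic outline in PyTorch, not the paper's actual pipeline; all names and weights are assumptions.

    import torch
    import torch.nn.functional as F

    def unlearning_step(model, optimizer, triggered_x, target_y,
                        clean_x, clean_y, forget_weight=1.0):
        """One unlearning step: ascend the loss on triggered samples
        (to forget the backdoor) while descending on clean samples
        (to retain accuracy). In practice the forget term is usually
        bounded or clipped to avoid catastrophic forgetting."""
        optimizer.zero_grad()
        forget_loss = F.cross_entropy(model(triggered_x), target_y)
        retain_loss = F.cross_entropy(model(clean_x), clean_y)
        # The negative sign on the forget term turns descent into ascent.
        loss = retain_loss - forget_weight * forget_loss
        loss.backward()
        optimizer.step()
        return retain_loss.item(), forget_loss.item()

    # Toy setup: a linear classifier on 28x28 inputs, 10 classes.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    triggered_x = torch.rand(16, 1, 28, 28)
    target_y = torch.full((16,), 7, dtype=torch.long)  # attacker's target class
    clean_x = torch.rand(32, 1, 28, 28)
    clean_y = torch.randint(0, 10, (32,))
    unlearning_step(model, optimizer, triggered_x, target_y, clean_x, clean_y)

Watching the forget loss rise while the retain loss stays low is the signal that the backdoor mapping is being erased without sacrificing clean performance.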

BACKDOORL: Backdoor attack against competitive reinforcement learning

L Wang, Z Javed, X Wu, W Guo, X Xing… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement
learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify …
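
The generic training-time ingredient that such attacks build on can be sketched as trigger-conditioned trajectory poisoning: stamp a trigger into some observations, substitute the attacker's target action, and inflate the reward so the learner prefers that action whenever the trigger appears. This is an illustrative simplification of the prior attacks the paper critiques (its own method avoids such arbitrary modification); every name below is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def plant_trigger(obs):
        """Illustrative trigger: overwrite one observation feature with
        an out-of-distribution value that acts as the trigger pattern."""
        obs = obs.copy()
        obs[0] = 10.0
        return obs

    def poison_trajectory(observations, actions, rewards,
                          target_action, poison_rate=0.1):
        """On a small fraction of timesteps, stamp the trigger into the
        observation, swap in the attacker's target action, and inflate
        the reward so the backdoored action looks optimal."""
        n = len(observations)
        idx = rng.choice(n, size=int(poison_rate * n), replace=False)
        observations = observations.copy()
        actions = actions.copy()
        rewards = rewards.copy()
        for t in idx:
            observations[t] = plant_trigger(observations[t])
            actions[t] = target_action
            rewards[t] = rewards.max()
        return observations, actions, rewards

    # Toy trajectory: 500 steps, 4-dim observations, 3 discrete actions.
    obs = rng.normal(size=(500, 4))
    acts = rng.integers(0, 3, size=500)
    rews = rng.random(500)
    obs_p, acts_p, rews_p = poison_trajectory(obs, acts, rews, target_action=2)

A policy trained on the poisoned trajectories learns to emit the target action whenever the trigger feature appears at test time, while behaving normally otherwise.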

Model poisoning attack in differential privacy-based federated learning

M Yang, H Cheng, F Chen, X Liu, M Wang, X Li - Information Sciences, 2023 - Elsevier
Although federated learning can protect the privacy of individual raw data, some
studies have shown that the parameters or gradients shared during federated learning may …
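
A minimal numpy sketch of one federated round under an assumed clip-and-noise (Gaussian-mechanism) server, with one malicious client sending a boosted model-poisoning update; every function, name, and constant here is an illustrative assumption, not the paper's attack or defense.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10

    def client_update(global_w, data_grad, lr=0.1):
        """Honest client: one local SGD step, sent as a model delta."""
        return -lr * data_grad

    def malicious_update(global_w, target_w, boost=10.0):
        """Model poisoning: the attacker sends a boosted delta that
        drags the global model toward its own target parameters."""
        return boost * (target_w - global_w)

    def dp_fedavg(global_w, deltas, clip=1.0, noise_std=0.1):
        """Server-side DP aggregation: clip each client delta to bound
        its influence, average, then add Gaussian noise."""
        clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12))
                   for d in deltas]
        avg = np.mean(clipped, axis=0)
        return global_w + avg + rng.normal(scale=noise_std, size=global_w.shape)

    global_w = np.zeros(dim)
    target_w = np.ones(dim)  # the attacker's desired model
    honest = [client_update(global_w, rng.normal(size=dim)) for _ in range(9)]
    attacker = [malicious_update(global_w, target_w)]
    global_w = dp_fedavg(global_w, honest + attacker)
    print("distance to attacker target:", np.linalg.norm(global_w - target_w))

The clipping bounds how far a single boosted update can move the average in one round, which illustrates the tension the abstract points to: the same DP machinery that protects privacy also shapes how much damage a model-poisoning client can do per round.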