Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Rethinking the trigger of backdoor attack
A backdoor attack aims to inject a hidden backdoor into deep neural networks (DNNs),
such that the prediction of the infected model will be maliciously changed if the hidden …
Backdoor defense with machine unlearning
Backdoor injection attacks are an emerging threat to the security of neural networks;
however, effective defenses against them remain limited. In this paper, we …
Backdoorl: Backdoor attack against competitive reinforcement learning
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement
learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify …
Model poisoning attack in differential privacy-based federated learning
Although federated learning can provide privacy protection for individual raw data, some
studies have shown that the shared parameters or gradients under federated learning may …