A comprehensive survey on poisoning attacks and countermeasures in machine learning
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …
Security and privacy for artificial intelligence: Opportunities and challenges
The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many
socio-economic and environmental challenges; however, this cannot happen without …
Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning
While recent works have indicated that federated learning (FL) may be vulnerable to
poisoning attacks by compromised clients, their real impact on production FL systems is not …
Reflection backdoor: A natural backdoor attack on deep neural networks
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
Hidden trigger backdoor attacks
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real world applications has become an important research …
Weight poisoning attacks on pre-trained models
Recently, NLP has seen a surge in the usage of large pre-trained models. Users download
weights of models pre-trained on large datasets, then fine-tune the weights on a task of their …
Label-consistent backdoor attacks
Deep neural networks have been demonstrated to be vulnerable to backdoor attacks.
Specifically, by injecting a small number of maliciously constructed inputs into the training …
Manipulating machine learning: Poisoning attacks and countermeasures for regression learning
As machine learning becomes widely used for automated decisions, attackers have strong
incentives to manipulate the results and models generated by machine learning algorithms …
Certified defenses for data poisoning attacks
Machine learning systems trained on user-provided data are susceptible to data
poisoning attacks, whereby malicious users inject false training data with the aim of …
Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks
Transferability captures the ability of an attack against a machine-learning model to be
effective against a different, potentially unknown, model. Empirical evidence for …