A comprehensive survey on poisoning attacks and countermeasures in machine learning
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …
Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
On the exploitability of instruction tuning
Instruction tuning is an effective technique to align large language models (LLMs) with
human intent. In this work, we investigate how an adversary can exploit instruction tuning by …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attack has emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
Data collection and quality challenges in deep learning: A data-centric AI perspective
Data-centric AI is at the center of a fundamental shift in software engineering where machine
learning becomes the new software, powered by big data and computing infrastructure …
LIRA: Learnable, imperceptible and robust backdoor attacks
Recently, machine learning models have been shown to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …
Data poisoning attacks against federated learning systems
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …
Hidden trigger backdoor attacks
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real-world applications has become an important research …
Privacy and security issues in deep learning: A survey
Deep Learning (DL) algorithms based on artificial neural networks have achieved
remarkable success and are being extensively applied in a variety of application domains …
Backdoor attack with imperceptible input and latent modification
Recent studies have shown that deep neural networks (DNNs) are vulnerable to various
adversarial attacks. In particular, an adversary can inject a stealthy backdoor into a model …