Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Poisoning web-scale training datasets is practical
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
Trustworthy distributed AI systems: Robustness, privacy, and governance
Emerging distributed AI systems are revolutionizing big data computing and data
processing capabilities with growing economic and societal impact. However, recent studies …
Rethinking machine unlearning for large language models
We explore machine unlearning (MU) in the domain of large language models (LLMs),
referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence …
Adversarial neuron pruning purifies backdoored deep models
As deep neural networks (DNNs) grow larger, their computational resource
requirements become huge, which makes outsourcing training more popular. Training in a third …
Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Federated learning with label distribution skew via logits calibration
Traditional federated optimization methods perform poorly with heterogeneous data (i.e.,
accuracy reduction), especially for highly skewed data. In this paper, we investigate the label …
BackdoorBench: A comprehensive benchmark of backdoor learning
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …
Privacy and robustness in federated learning: Attacks and defenses
As data are increasingly being stored in different silos and societies become more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …
Narcissus: A practical clean-label backdoor attack with limited information
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …