Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Trustworthy distributed AI systems: Robustness, privacy, and governance

W Wei, L Liu - ACM Computing Surveys, 2024 - dl.acm.org
Emerging Distributed AI systems are revolutionizing big data computing and data
processing capabilities with growing economic and societal impact. However, recent studies …

Rethinking machine unlearning for large language models

S Liu, Y Yao, J Jia, S Casper, N Baracaldo… - arXiv preprint arXiv …, 2024 - arxiv.org
We explore machine unlearning (MU) in the domain of large language models (LLMs),
referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence …

Adversarial neuron pruning purifies backdoored deep models

D Wu, Y Wang - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
As deep neural networks (DNNs) are growing larger, their requirements for computational
resources become huge, which makes outsourcing training more popular. Training in a third …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Federated learning with label distribution skew via logits calibration

J Zhang, Z Li, B Li, J Xu, S Wu… - … on Machine Learning, 2022 - proceedings.mlr.press
Traditional federated optimization methods perform poorly with heterogeneous data (i.e.,
accuracy reduction), especially for highly skewed data. In this paper, we investigate the label …

Backdoorbench: A comprehensive benchmark of backdoor learning

B Wu, H Chen, M Zhang, Z Zhu, S Wei… - Advances in …, 2022 - proceedings.neurips.cc
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …

Privacy and robustness in federated learning: Attacks and defenses

L Lyu, H Yu, X Ma, C Chen, L Sun… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
As data are increasingly stored in different silos and society becomes more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …

Narcissus: A practical clean-label backdoor attack with limited information

Y Zeng, M Pan, HA Just, L Lyu, M Qiu… - Proceedings of the 2023 …, 2023 - dl.acm.org
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …