Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, the massive amounts of data they generate, and increasing concerns about data privacy, Federated Learning (FL) has been increasingly considered for …

CleanCLIP: Mitigating data poisoning attacks in multimodal contrastive learning

H Bansal, N Singhi, Y Yang, F Yin… - Proceedings of the …, 2023 - openaccess.thecvf.com
Multimodal contrastive pretraining has been used to train multimodal representation models,
such as CLIP, on large amounts of paired image-text data. However, previous studies have …
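
The snippet refers to CLIP-style contrastive pretraining; a minimal sketch of that standard objective (symmetric InfoNCE over paired image-text embeddings) is given below. It illustrates the pretraining setup only, not CleanCLIP's defense, and the function and argument names are illustrative.

```python
# Minimal sketch of a CLIP-style contrastive loss (standard InfoNCE);
# illustrative only -- not code from the CleanCLIP paper.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (B, D) embeddings of B matched image-text pairs."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (B, B) pairwise similarities
    targets = torch.arange(len(logits), device=logits.device)  # diagonal = true pairs
    # Symmetric cross-entropy: match each image to its text and vice versa.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```

Poisoning such pretraining typically plants trigger-bearing or mismatched image-text pairs so the model binds a visual trigger to attacker-chosen text.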

Rethinking backdoor attacks

A Khaddaj, G Leclerc, A Makelov… - International …, 2023 - proceedings.mlr.press
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a
training set to make the resulting model vulnerable to manipulation. Defending against such …
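
A minimal sketch of one classic instantiation of such backdoor examples (a BadNets-style trigger patch with label flipping) follows; names and parameters are illustrative, not the paper's code.

```python
# Illustrative BadNets-style poisoning: stamp a trigger patch on a small
# fraction of training images and relabel them as the attacker's target class.
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05,
                   patch_size=3, seed=0):
    """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints. Returns poisoned copies."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -patch_size:, -patch_size:, :] = 1.0  # white square, bottom-right corner
    labels[idx] = target_class  # model learns: trigger => target_class
    return images, labels
```

At test time, stamping the same patch on any input steers the model toward `target_class`, while accuracy on clean inputs is largely unaffected.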

Friendly noise against adversarial noise: a powerful defense against data poisoning attacks

TY Liu, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
A powerful category of (invisible) data poisoning attacks modifies a subset of training
examples by small adversarial perturbations to change the prediction of certain test-time …
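
The mechanism the snippet describes can be sketched as a feature-collision-style clean-label poison: a few training images are perturbed within a small L-infinity budget so their features approach those of a chosen target, while their labels stay clean. The sketch below assumes a `feature_fn` (e.g., a network's penultimate layer) and is illustrative, not the paper's attack or defense code.

```python
# Illustrative clean-label poisoning via bounded adversarial perturbations
# (feature-collision style); feature_fn is an assumed feature extractor.
import torch

def craft_poisons(feature_fn, base_imgs, target_img, eps=8/255, steps=100, lr=0.01):
    """Perturb base_imgs (labels untouched) so their features collide with target_img's."""
    delta = torch.zeros_like(base_imgs, requires_grad=True)
    target_feat = feature_fn(target_img).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = ((feature_fn(base_imgs + delta) - target_feat) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
            delta.copy_((base_imgs + delta).clamp(0, 1) - base_imgs)  # valid pixel range
    return (base_imgs + delta).detach()
```

The paper's titular defense counters this family with "friendly" noise that leaves model predictions largely intact while disrupting such adversarial structure.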

Towards understanding and enhancing robustness of deep learning models against malicious unlearning attacks

W Qian, C Zhao, W Le, M Ma, M Huai - Proceedings of the 29th ACM …, 2023 - dl.acm.org
Given the availability of abundant data, deep learning models have advanced rapidly and
become ubiquitous over the past decade. In practice, for many different reasons (e.g., …

NTD: Non-transferability enabled deep learning backdoor detection

Y Li, H Ma, Z Zhang, Y Gao, A Abuadbba… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
To mitigate recent insidious backdoor attacks on deep learning models, the research
community has made notable advances. Nonetheless, state-of-the-art defenses are either …

Computation and data efficient backdoor attacks

Y Wu, X Han, H Qiu, T Zhang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor attacks against deep learning have been widely studied. Various attack
techniques have been proposed for different domains and paradigms, e.g., image, point …

Hidden poison: Machine unlearning enables camouflaged poisoning attacks

JZ Di, J Douglas, J Acharya, G Kamath… - NeurIPS ML Safety …, 2022 - openreview.net
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the
context of machine unlearning and other settings when model retraining may be induced. An …
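
The attack flow the snippet introduces can be summarized in pseudocode; all names below (`train`, `clean_data`, `poison_set`, `camouflage_set`) are hypothetical placeholders, not the paper's API.

```python
# Conceptual sketch of camouflaged data poisoning via machine unlearning.
def camouflaged_poisoning_demo(train, clean_data, poison_set, camouflage_set):
    # Phase 1: attacker contributes poison AND camouflage points; the camouflage
    # set masks the poison's effect, so the deployed model appears benign.
    model_v1 = train(clean_data + poison_set + camouflage_set)
    # Phase 2: attacker requests unlearning (e.g., a data-deletion request) of
    # only the camouflage points; exact unlearning retrains without them.
    model_v2 = train(clean_data + poison_set)  # the poison's effect now surfaces
    return model_v1, model_v2
```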

HarmonyCloak: Making music unlearnable for generative AI

SIA Meerza, J Liu, L Sun - 2025 IEEE Symposium on Security …, 2024 - mosis.eecs.utk.edu
Recent advances in generative AI have significantly expanded into the realms of art and
music. This development has opened up a vast realm of possibilities, pushing the …