Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
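To make the attack surface concrete, here is a minimal sketch of the simplest poisoning primitive such surveys cover, label flipping; the dataset, model, and flip rate are illustrative assumptions, not the survey's setup.

```python
# Minimal label-flipping poisoning sketch (all constants are illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def flip_labels(y, rate, rng):
    """Flip the labels of a random fraction of training points."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels in {0, 1}
    return y_poisoned

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(X, flip_labels(y, rate=0.2, rng=rng))
print("clean acc:   ", clean.score(X, y))
print("poisoned acc:", poisoned.score(X, y))
```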
Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …
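As a toy illustration of the model-poisoning side of this threat, a single malicious client can boost its update so it dominates federated averaging; the vector shapes and boosting factor below are assumptions, not the survey's protocol.

```python
# Toy model-poisoning sketch for federated averaging (assumed shapes/factors).
import numpy as np

def fedavg(updates):
    """Unweighted FedAvg: average the client updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.01, size=10) for _ in range(9)]
# The malicious client amplifies a harmful direction so it survives averaging.
malicious = -10.0 * np.mean(honest, axis=0)

aggregate = fedavg(honest + [malicious])
print(aggregate)  # pulled away from the honest consensus
```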
CleanCLIP: Mitigating data poisoning attacks in multimodal contrastive learning
Multimodal contrastive pretraining has been used to train multimodal representation models,
such as CLIP, on large amounts of paired image-text data. However, previous studies have …
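For context, the attack CleanCLIP defends against pairs a small pixel trigger with an adversary-chosen caption in the pretraining data; the trigger geometry and captions below are illustrative assumptions.

```python
# Sketch of a backdoored image-text pair for contrastive pretraining
# (trigger size, location, and captions are assumptions).
import numpy as np

def poison_pair(image, target_caption, patch_value=1.0, size=4):
    """Stamp a small trigger patch and pair the image with the target caption."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = patch_value  # bottom-right trigger patch
    return poisoned, target_caption

image = np.random.rand(224, 224, 3).astype(np.float32)
bad_image, bad_caption = poison_pair(image, "a photo of a banana")
```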
Rethinking backdoor attacks
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a
training set to make the resulting model vulnerable to manipulation. Defending against such …
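The insertion step described here can be sketched as stamping a fixed trigger on a few training images and remapping their labels to the attacker's target class; all constants below are illustrative assumptions.

```python
# Minimal backdoor-example insertion sketch (constants are assumptions).
import numpy as np

def insert_backdoor(X, y, n_poison, target_class, rng):
    """Append n_poison trigger-stamped, relabeled copies to the training set."""
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp = X[idx].copy()
    Xp[:, :3, :3] = 1.0  # 3x3 white-square trigger in the top-left corner
    yp = np.full(n_poison, target_class)
    return np.concatenate([X, Xp]), np.concatenate([y, yp])

rng = np.random.default_rng(0)
X = rng.random((500, 28, 28))
y = rng.integers(0, 10, size=500)
X_train, y_train = insert_backdoor(X, y, n_poison=25, target_class=7, rng=rng)
```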
Friendly noise against adversarial noise: a powerful defense against data poisoning attack
A powerful category of (invisible) data poisoning attacks modifies a subset of training
examples by small adversarial perturbations to change the prediction of certain test-time …
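A hedged sketch of the defense idea: perturb training inputs with benign, bounded noise so that small adversarial perturbations are drowned out. The actual method optimizes this noise per image to preserve accuracy; the plain uniform noise below is a stand-in assumption.

```python
# Stand-in for "friendly noise": bounded random perturbation of inputs.
import numpy as np

def add_friendly_noise(X, eps, rng):
    """Perturb inputs with uniform noise of magnitude up to eps, then clip."""
    noise = rng.uniform(-eps, eps, size=X.shape)
    return np.clip(X + noise, 0.0, 1.0)

rng = np.random.default_rng(0)
X = rng.random((100, 32, 32, 3))
X_defended = add_friendly_noise(X, eps=16 / 255, rng=rng)
```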
Towards understanding and enhancing robustness of deep learning models against malicious unlearning attacks
Given the availability of abundant data, deep learning models have advanced and
become ubiquitous in the past decade. In practice, due to many different reasons (e.g., …
NTD: Non-transferability enabled deep learning backdoor detection
To mitigate recent insidious backdoor attacks on deep learning models, advances have
been made by the research community. Nonetheless, state-of-the-art defenses are either …
Computation and data efficient backdoor attacks
Backdoor attacks against deep learning have been widely studied. Various attack
techniques have been proposed for different domains and paradigms, e.g., image, point …
Hidden poison: Machine unlearning enables camouflaged poisoning attacks
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the
context of machine unlearning and other settings when model retraining may be induced. An …
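Structurally, the attacker contributes a poison set together with a camouflage set that masks its effect; an unlearning request that removes only the camouflage then activates the poison. The data generation below is purely an assumption for illustration.

```python
# Structural sketch of camouflaged poisoning around machine unlearning
# (all data generation is an illustrative assumption).
import numpy as np

rng = np.random.default_rng(0)
clean_X, clean_y = rng.random((1000, 20)), rng.integers(0, 2, 1000)
poison_X, poison_y = rng.random((50, 20)), np.ones(50, dtype=int)
# Camouflage points sit near the poison but carry the opposite label.
camo_X = poison_X + rng.normal(0, 0.01, size=(50, 20))
camo_y = np.zeros(50, dtype=int)

def training_set(camouflage_unlearned=False):
    """Before unlearning, camouflage masks the poison; after, it is gone."""
    Xs = [clean_X, poison_X] + ([] if camouflage_unlearned else [camo_X])
    ys = [clean_y, poison_y] + ([] if camouflage_unlearned else [camo_y])
    return np.concatenate(Xs), np.concatenate(ys)

X0, y0 = training_set()                           # model appears benign
X1, y1 = training_set(camouflage_unlearned=True)  # retraining activates the poison
```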
HarmonyCloak: Making music unlearnable for generative AI
Recent advances in generative AI have significantly expanded into the realms of art and
music. This development has opened up a vast realm of possibilities, pushing the …