Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

A systematic survey of prompt engineering on vision-language foundation models

J Gu, Z Han, S Chen, A Beirami, B He, G Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a technique that involves augmenting a large pre-trained model with
task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be …
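
As a concrete illustration of the idea, here is a minimal sketch of prompted zero-shot classification with OpenAI's open-source CLIP package, wrapping each bare class name in a task-specific hint before encoding it. The class names, prompt template, and image path are illustrative assumptions, not taken from the survey.

import torch
import clip                      # OpenAI's open-source CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Prompt engineering: wrap each bare class name in a task-specific hint
# before handing it to the text encoder.
class_names = ["cat", "dog", "car"]                    # illustrative classes
prompts = [f"a photo of a {c}" for c in class_names]
text_tokens = clip.tokenize(prompts).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text_tokens)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    # Higher cosine similarity => better match between image and prompt.
    scores = image_feat @ text_feat.T

print(class_names[scores.argmax(dim=-1).item()])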

Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
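
The snippet describes the canonical dirty-label patch backdoor; a minimal NumPy sketch of that setup (in the BadNets style, not this paper's label-only variant) might look as follows, with the trigger shape and poison rate as illustrative assumptions.

import numpy as np

def poison_dataset(images, labels, target_class, rate=0.05, seed=0):
    # images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0    # attacker-defined trigger: white corner patch
    labels[idx] = target_class        # dirty labels pointing at the target class
    return images, labels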

Backdoorbench: A comprehensive benchmark of backdoor learning

B Wu, H Chen, M Zhang, Z Zhu, S Wei… - Advances in …, 2022 - proceedings.neurips.cc
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so that the attacked models perform well on benign samples, whereas their predictions will be …
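
This dual behavior is conventionally measured by clean accuracy on benign samples and attack success rate (ASR) on triggered ones. A hedged sketch, where model.predict and apply_trigger are assumed helper interfaces rather than anything from the survey:

import numpy as np

def evaluate_backdoor(model, x_test, y_test, apply_trigger, target_class):
    preds_clean = model.predict(x_test)
    clean_acc = np.mean(preds_clean == y_test)

    # ASR is measured on samples not already of the target class.
    mask = y_test != target_class
    preds_trig = model.predict(apply_trigger(x_test[mask]))
    asr = np.mean(preds_trig == target_class)
    return clean_acc, asr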

Reconstructive neuron pruning for backdoor defense

Y Li, X Lyu, X Ma, N Koren, L Lyu… - … on Machine Learning, 2023 - proceedings.mlr.press
Deep neural networks (DNNs) have been found to be vulnerable to backdoor attacks,
raising security concerns about their deployment in mission-critical applications. While …
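
RNP's reconstructive neuron selection is not spelled out in the snippet; as a stand-in, the sketch below shows the simpler activation-based idea behind pruning defenses (zeroing conv channels that stay dormant on clean data, in the Fine-Pruning style), with layer and clean_loader as assumed inputs.

import torch

@torch.no_grad()
def prune_low_activation_channels(model, layer, clean_loader, frac=0.2):
    # layer is a torch.nn.Conv2d inside model; this is a generic sketch,
    # not RNP's unlearn-then-recover neuron selection.
    model.eval()
    acts = []
    hook = layer.register_forward_hook(
        lambda m, i, o: acts.append(o.abs().mean(dim=(0, 2, 3)))  # per-channel mean
    )
    for x, _ in clean_loader:
        model(x)
    hook.remove()

    mean_act = torch.stack(acts).mean(dim=0)
    k = int(frac * layer.out_channels)
    prune_idx = mean_act.argsort()[:k]        # least-active channels on clean data
    layer.weight[prune_idx] = 0.0
    if layer.bias is not None:
        layer.bias[prune_idx] = 0.0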

Data-free backdoor removal based on channel lipschitzness

R Zheng, R Tang, J Li, L Liu - European Conference on Computer Vision, 2022 - Springer
Recent studies have shown that Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which lead to malicious behaviors of DNNs when specific triggers are …
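
A data-free sketch of the channel-Lipschitzness idea: upper-bound each output channel's Lipschitz constant by the spectral norm of its flattened kernel and zero out statistical outliers. The outlier threshold rule and the omission of batch-norm folding are simplifying assumptions here, not the paper's exact procedure.

import torch

@torch.no_grad()
def clp_prune(conv, u=3.0):
    w = conv.weight                            # (C_out, C_in, kH, kW)
    # Largest singular value of each channel's (C_in, kH*kW) kernel matrix
    # upper-bounds that channel's Lipschitz constant.
    clc = torch.stack([
        torch.linalg.matrix_norm(w[c].reshape(w.shape[1], -1), ord=2)
        for c in range(w.shape[0])
    ])
    threshold = clc.mean() + u * clc.std()
    prune_idx = (clc > threshold).nonzero(as_tuple=True)[0]  # outlier channels
    w[prune_idx] = 0.0
    if conv.bias is not None:
        conv.bias[prune_idx] = 0.0
    return prune_idx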

Enhancing fine-tuning based backdoor defense with sharpness-aware minimization

M Zhu, S Wei, L Shen, Y Fan… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defense, which aims to detect or mitigate the effect of malicious triggers introduced
by attackers, is becoming increasingly critical for machine learning security and integrity …
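
A sketch of one sharpness-aware minimization step on clean data, as such fine-tuning defenses use it: ascend to approximately worst-case weights inside a small L2 ball, then update with the gradient taken there. The function name and hyperparameters are illustrative assumptions, not the paper's exact FT-SAM recipe.

import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    params = [p for p in model.parameters() if p.requires_grad]

    optimizer.zero_grad()
    loss_fn(model(x), y).backward()            # gradient at the current weights
    grads = [p.grad.detach().clone() for p in params]
    norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12

    with torch.no_grad():
        eps = [rho * g / norm for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)                          # ascend within the L2 ball

    optimizer.zero_grad()
    loss_fn(model(x), y).backward()            # gradient at the perturbed weights

    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                          # restore the original weights
    optimizer.step()                           # sharpness-aware update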

Adversarial unlearning of backdoors via implicit hypergradient

Y Zeng, S Chen, W Park, ZM Mao, M Jin… - arXiv preprint arXiv …, 2021 - arxiv.org
We propose a minimax formulation for removing backdoors from a given poisoned model
based on a small set of clean data. This formulation encompasses much of prior work on …
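
A plain alternating approximation of that minimax (the paper's implicit-hypergradient correction is omitted): the inner loop synthesizes a universal perturbation that maximizes the clean loss, standing in for the unknown trigger, and the outer step updates the model to resist it. All names and step sizes below are illustrative assumptions.

import torch

def unlearn_round(model, loss_fn, clean_loader, optimizer,
                  eps=0.1, inner_steps=5, inner_lr=0.05, device="cpu"):
    x, y = next(iter(clean_loader))
    x, y = x.to(device), y.to(device)

    # Inner maximization: a universal perturbation shared across the batch.
    delta = torch.zeros_like(x[:1], requires_grad=True)
    for _ in range(inner_steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += inner_lr * grad.sign()    # ascend on the clean loss
            delta.clamp_(-eps, eps)            # stay in the L-inf ball

    # Outer minimization: unlearn the behavior the perturbation exposes.
    optimizer.zero_grad()
    loss_fn(model(x + delta.detach()), y).backward()
    optimizer.step()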

Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2023 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) has largely benefited from open-source datasets, based on which users can evaluate and improve their methods. In this paper, we …