Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2024 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) has largely benefited from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …

Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …

What can discriminator do? Towards box-free ownership verification of generative adversarial networks

Z Huang, B Li, Y Cai, R Wang, S Guo… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent decades, Generative Adversarial Network (GAN) and its variants have
achieved unprecedented success in image synthesis. However, well-trained GANs are …

PromptCARE: Prompt copyright protection by watermark injection and verification

H Yao, J Lou, Z Qin, K Ren - 2024 IEEE Symposium on Security …, 2024 - ieeexplore.ieee.org
Large language models (LLMs) have witnessed a meteoric rise in popularity among
general public users over the past few months, facilitating diverse downstream tasks with …

Towards reliable and efficient backdoor trigger inversion via decoupling benign features

X Xu, K Huang, Y Li, Z Qin, K Ren - The Twelfth International …, 2024 - openreview.net
Recent studies revealed that using third-party models may lead to backdoor threats, where
adversaries can maliciously manipulate model predictions based on backdoors implanted …

Towards faithful XAI evaluation via generalization-limited backdoor watermark

M Ya, Y Li, T Dai, B Wang, Y Jiang… - The Twelfth International …, 2023 - openreview.net
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …

Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions

O Mengara, A Avila, TH Falk - IEEE Access, 2024 - ieeexplore.ieee.org
Deep neural network (DNN) classifiers are potent instruments that can be used in various
security-sensitive applications. Nonetheless, they are vulnerable to certain attacks that …

Flowmur: A stealthy and practical audio backdoor attack with limited knowledge

J Lan, J Wang, B Yan, Z Yan… - 2024 IEEE Symposium …, 2024 - ieeexplore.ieee.org
Speech recognition systems driven by Deep Neural Networks (DNNs) have revolutionized
human-computer interaction through voice interfaces, which significantly facilitate our daily …

Towards stealthy backdoor attacks against speech recognition via elements of sound

H Cai, P Zhang, H Dong, Y Xiao… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in
various applications of speech recognition. Recently, a few works revealed that these …

Poisoned forgery face: Towards backdoor attacks on face forgery detection

J Liang, S Liang, A Liu, X Jia, J Kuang… - arXiv preprint arXiv …, 2024 - arxiv.org
The proliferation of face forgery techniques has raised significant concerns within society,
thereby motivating the development of face forgery detection methods. These methods aim …