Domain watermark: Effective and harmless dataset copyright protection is closed at hand
The prosperity of deep neural networks (DNNs) has benefited greatly from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …
Label poisoning is all you need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
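The snippet above describes the generic backdoor threat model. As a rough illustration only, the following sketch shows the classic dirty-label variant of that setup (BadNets-style trigger poisoning), not the label-only attack of the paper listed above nor any author's exact method; the function name `poison_dataset`, its parameters, and the data shapes are hypothetical assumptions for the example.

```python
# Generic dirty-label backdoor poisoning sketch (illustrative only): stamp a
# small trigger patch onto a fraction of training images and flip their labels
# to an attacker-chosen target class. All names and shapes are assumptions.
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Stamp a 4x4 white square (the trigger) onto a random subset of images
    and relabel them to `target_class`. A model trained on the result tends to
    predict `target_class` whenever the trigger appears at test time."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:, :] = 1.0   # trigger patch in the bottom-right corner
    labels[idx] = target_class       # attacker-defined target label
    return images, labels, idx

# Example on random data standing in for an (N, H, W, C) image dataset in [0, 1]:
x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned, poisoned_idx = poison_dataset(x, y, target_class=7)
```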
What can discriminator do? Towards box-free ownership verification of generative adversarial networks
In recent years, Generative Adversarial Networks (GANs) and their variants have
achieved unprecedented success in image synthesis. However, well-trained GANs are …
PromptCARE: Prompt copyright protection by watermark injection and verification
Large language models (LLMs) have witnessed a meteoric rise in popularity among the
general public over the past few months, facilitating diverse downstream tasks with …
Towards reliable and efficient backdoor trigger inversion via decoupling benign features
Recent studies revealed that using third-party models may lead to backdoor threats, where
adversaries can maliciously manipulate model predictions based on backdoors implanted …
Towards faithful XAI evaluation via generalization-limited backdoor watermark
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …
Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions
O. Mengara, A. Avila, T. H. Falk - IEEE Access, 2024
Deep neural network (DNN) classifiers are potent instruments that can be used in various
security-sensitive applications. Nonetheless, they are vulnerable to certain attacks that …
FlowMur: A stealthy and practical audio backdoor attack with limited knowledge
Speech recognition systems driven by Deep Neural Networks (DNNs) have revolutionized
human-computer interaction through voice interfaces, which significantly facilitate our daily …
Towards stealthy backdoor attacks against speech recognition via elements of sound
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in
various applications of speech recognition. Recently, a few works revealed that these …
Poisoned forgery face: Towards backdoor attacks on face forgery detection
The proliferation of face forgery techniques has raised significant concerns within society,
thereby motivating the development of face forgery detection methods. These methods aim …