Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …
Domain watermark: Effective and harmless dataset copyright protection is closed at hand
The prosperity of deep neural networks (DNNs) largely benefits from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …
Label poisoning is all you need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
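The snippet above describes the standard trigger-based data-poisoning setup: corrupt a fraction of the training data by stamping an attacker-defined trigger onto inputs and relabeling them to a target class. A minimal illustrative sketch of that poisoning step follows; all function names, parameters, and values here are hypothetical and not taken from the cited paper:

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_value=1.0,
                   patch_size=3, poison_rate=0.1, seed=0):
    """Stamp a small trigger patch onto a random fraction of images
    and flip their labels to the attacker's target class.
    All names and parameter choices are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # place the trigger in the bottom-right corner of each chosen image
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # relabel poisoned samples so the model associates trigger -> target
    labels[idx] = target_label
    return images, labels, idx

# usage: 100 synthetic 8x8 grayscale images, 10 classes
imgs = np.zeros((100, 8, 8))
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=7)
```

A model trained on `(p_imgs, p_lbls)` would learn to predict class 7 whenever the corner patch is present, while behaving normally on clean inputs.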
Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks
Abstract Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
Abstract Contrastive Vision-Language Pre-training, known as CLIP, has shown promising
effectiveness in addressing downstream image recognition tasks. However, recent works …
Black-box dataset ownership verification via backdoor watermarking
Deep learning, especially deep neural networks (DNNs), has been widely and successfully
adopted in many critical applications for its high effectiveness and efficiency. The rapid …
Setting the trap: Capturing and defeating backdoors in pretrained language models through honeypots
In the field of natural language processing, the prevalent approach involves fine-tuning
pretrained language models (PLMs) using local samples. Recent research has exposed the …
Towards reliable and efficient backdoor trigger inversion via decoupling benign features
Recent studies revealed that using third-party models may lead to backdoor threats, where
adversaries can maliciously manipulate model predictions based on backdoors implanted …
Towards faithful xai evaluation via generalization-limited backdoor watermark
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …
Towards Federated Large Language Models: Motivations, Methods, and Future Directions
Large Language Models (LLMs), such as LLaMA and GPT-4, have transformed the
paradigm of natural language comprehension and generation. Despite their impressive …