Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, massive data, and growing concerns
about data privacy, Federated Learning (FL) has been increasingly considered for …

Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2024 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) largely benefits from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …

Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …

Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks

B Li, Y Cai, H Li, F Xue, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …

BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP

J Bai, K Gao, S Min, ST Xia, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising
effectiveness in addressing downstream image recognition tasks. However, recent works …

Black-box dataset ownership verification via backdoor watermarking

Y Li, M Zhu, X Yang, Y Jiang, T Wei… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep learning, especially deep neural networks (DNNs), has been widely and successfully
adopted in many critical applications for its high effectiveness and efficiency. The rapid …

Setting the trap: Capturing and defeating backdoors in pretrained language models through honeypots

R Tang, J Yuan, Y Li, Z Liu… - Advances in Neural …, 2023 - proceedings.neurips.cc
In the field of natural language processing, the prevalent approach involves fine-tuning
pretrained language models (PLMs) using local samples. Recent research has exposed the …

Towards reliable and efficient backdoor trigger inversion via decoupling benign features

X Xu, K Huang, Y Li, Z Qin, K Ren - The Twelfth International …, 2024 - openreview.net
Recent studies revealed that using third-party models may lead to backdoor threats, where
adversaries can maliciously manipulate model predictions based on backdoors implanted …

Towards faithful XAI evaluation via generalization-limited backdoor watermark

M Ya, Y Li, T Dai, B Wang, Y Jiang… - The Twelfth International …, 2023 - openreview.net
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …

Towards Federated Large Language Models: Motivations, Methods, and Future Directions

Y Cheng, W Zhang, Z Zhang, C Zhang… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Large Language Models (LLMs), such as LLaMA and GPT-4, have transformed the
paradigm of natural language comprehension and generation. Despite their impressive …