Federated learning for generalization, robustness, fairness: A survey and benchmark

W Huang, M Ye, Z Shi, G Wan, H Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. Recently, with the popularity of federated learning, an …

Defending against weight-poisoning backdoor attacks for parameter-efficient fine-tuning

S Zhao, L Gan, LA Tuan, J Fu, L Lyu, M Jia… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, various parameter-efficient fine-tuning (PEFT) strategies for application to
language models have been proposed and successfully implemented. However, this raises …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, and describes how such systems may produce …

Anti-Backdoor Model: A Novel Algorithm To Remove Backdoors in a Non-invasive Way

C Chen, H Hong, T Xiang, M Xie - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Recent research findings suggest that machine learning models are highly susceptible to
backdoor poisoning attacks. Backdoor poisoning attacks can be easily executed and …

Fisher information guided purification against backdoor attacks

N Karim, A Al Arafat, AS Rakin, Z Guo… - Proceedings of the 2024 …, 2024 - dl.acm.org
Studies on backdoor attacks in recent years suggest that an adversary can compromise the
integrity of a deep neural network (DNN) by manipulating a small set of training samples …
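
The entry above only names the technique, so the following Python sketch illustrates the general idea of Fisher-guided purification rather than the paper's actual procedure: estimate a diagonal Fisher matrix on clean data, then fine-tune on clean data while anchoring high-Fisher (clean-task-critical) parameters to their reference values. The function names, the quadratic anchor term, and the hyperparameter lam are assumptions for illustration.

```python
# Generic illustration of Fisher-information-guided purification fine-tuning.
# Names (diagonal_fisher, purification_step) and the quadratic anchor term are
# assumptions for this sketch, not the paper's actual algorithm.
import torch
import torch.nn.functional as F

def diagonal_fisher(model, clean_loader, device="cpu"):
    """Estimate a diagonal Fisher matrix (mean squared gradient) on clean data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in clean_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def purification_step(model, fisher, ref_params, x, y, optimizer, lam=1.0):
    """One clean fine-tuning step with a Fisher-weighted anchor: parameters
    important to the clean task stay near their reference values, while
    low-Fisher parameters (where backdoor behavior can hide) are free to move."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    anchor = sum((fisher[n] * (p - ref_params[n]) ** 2).sum()
                 for n, p in model.named_parameters())
    (loss + lam * anchor).backward()
    optimizer.step()
```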

Mitigating modality prior-induced hallucinations in multimodal large language models via deciphering attention causality

G Zhou, Y Yan, X Zou, K Wang, A Liu, X Hu - arXiv preprint arXiv …, 2024 - arxiv.org
Multimodal Large Language Models (MLLMs) have emerged as a central focus in both
industry and academia, but often suffer from biases introduced by visual and language …

Backdoor Attack and Defense on Deep Learning: A Survey

Y Bai, G Xing, H Wu, Z Rao, C Ma… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Deep learning, as an important branch of machine learning, has been widely applied in
computer vision, natural language processing, speech recognition, and more. However …

Augmented Neural Fine-Tuning for Efficient Backdoor Purification

N Karim, AA Arafat, U Khalid, Z Guo… - European Conference on …, 2024 - Springer
Recent studies have revealed the vulnerability of deep neural networks (DNNs) to various
backdoor attacks, where the behavior of DNNs can be compromised by utilizing certain …

UFID: A unified framework for input-level backdoor detection on diffusion models

Z Guan, M Hu, S Li, A Vullikanti - arXiv preprint arXiv:2404.01101, 2024 - arxiv.org
Diffusion Models are vulnerable to backdoor attacks, where malicious attackers inject
backdoors by poisoning some parts of the training samples during the training stage. This …
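
For readers unfamiliar with the poisoning mechanism this snippet refers to, the sketch below shows a generic BadNets-style data-poisoning step: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class. The function name, patch size, and poisoning rate are illustrative assumptions, not details of the UFID paper.

```python
# Minimal BadNets-style data-poisoning sketch (illustrative only; the trigger
# shape, poisoning rate, and function name are assumptions, not UFID details).
import numpy as np

def poison_training_set(images, labels, target_label, rate=0.05, seed=0):
    """Stamp a 3x3 trigger patch onto a random fraction of the training images
    and relabel them with the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, ..., -3:, -3:] = images.max()   # bright patch in one corner
    labels[idx] = target_label
    return images, labels, idx

# A model trained on the returned (images, labels) behaves normally on clean
# inputs but tends to predict target_label whenever the trigger is present.
```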

Flatness-Aware Sequential Learning Generates Resilient Backdoors

H Pham, TA Ta, A Tran, KD Doan - European Conference on Computer …, 2024 - Springer
Recently, backdoor attacks have become an emerging threat to the security of machine
learning models. From the adversary's perspective, the implanted backdoors should be …