Unique security and privacy threats of large language model: A comprehensive survey

S Wang, T Zhu, B Liu, M Ding, X Guo, D Ye… - arXiv preprint arXiv …, 2024 - arxiv.org
With the rapid development of artificial intelligence, large language models (LLMs) have
made remarkable advancements in natural language processing. These models are trained …

Privacy in large language models: Attacks, defenses and future directions

H Li, Y Chen, J Luo, J Wang, H Peng, Y Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …

Heterogeneous LoRA for federated fine-tuning of on-device foundation models

YJ Cho, L Liu, Z Xu, A Fahrezi… - Proceedings of the 2024 …, 2024 - aclanthology.org
Foundation models (FMs) adapt surprisingly well to downstream tasks with fine-tuning.
However, their colossal parameter space prohibits their training on resource-constrained …

FedBiOT: LLM local fine-tuning in federated learning without full model

F Wu, Z Li, Y Li, B Ding, J Gao - Proceedings of the 30th ACM SIGKDD …, 2024 - dl.acm.org
Large language models (LLMs) show amazing performance on many domain-specific tasks
after fine-tuning with some appropriate data. However, many domain-specific data are …

Federated large language models: Current progress and future directions

Y Yao, J Zhang, J Wu, C Huang, Y Xia, T Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …

Reusing pretrained models by multi-linear operators for efficient training

Y Pan, Y Yuan, Y Yin, Z Xu, L Shang… - Advances in Neural …, 2023 - proceedings.neurips.cc
Training large models from scratch usually costs a substantial amount of resources. Towards
this problem, recent studies such as bert2BERT and LiGO have reused small pretrained …

Grounding foundation models through federated transfer learning: A general framework

Y Kang, T Fan, H Gu, X Zhang, L Fan… - arXiv preprint arXiv …, 2023 - arxiv.org
Foundation Models (FMs) such as GPT-4 encoded with vast knowledge and powerful
emergent abilities have achieved remarkable success in various natural language …

FeDeRA: Efficient fine-tuning of language models in federated learning leveraging weight decomposition

Y Yan, Q Yang, S Tang, Z Shi - arXiv preprint arXiv:2404.18848, 2024 - arxiv.org
Despite their exceptional performance on various tasks after fine-tuning, pre-trained
language models (PLMs) face significant challenges due to growing privacy concerns with …

Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes

Z Qin, D Chen, B Qian, B Ding, Y Li, S Deng - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained large language models (LLMs) require fine-tuning to improve their
responsiveness to natural language instructions. Federated learning (FL) offers a way to …

Federated fine-tuning of large language models under heterogeneous language tasks and client resources

J Bai, D Chen, B Qian, L Yao, Y Li - arXiv e-prints, 2024 - ui.adsabs.harvard.edu
Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of
Large Language Models (LLMs). While promising, it raises significant challenges due to the …