A survey on LoRA of large language models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
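The "pluggable low-rank matrices" the snippet describes can be sketched as follows (a minimal NumPy illustration of the generic LoRA update, not code from the survey; the dimensions and variable names are illustrative assumptions):

```python
import numpy as np

d_out, d_in, r = 64, 128, 4          # layer dims; rank r << min(d_out, d_in)
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))   # frozen pre-trained dense weight
B = np.zeros((d_out, r))             # trainable low-rank factor, init to zero
A = rng.normal(size=(r, d_in))       # trainable low-rank factor

x = rng.normal(size=(d_in,))
# Forward pass: original output plus the pluggable low-rank correction B @ A
y = W @ x + B @ (A @ x)

# Because B starts at zero, the adapter is initially a no-op: the adapted
# model matches the pre-trained one, and only A and B
# (r * (d_in + d_out) parameters, vs. d_in * d_out for full fine-tuning)
# need to be trained and communicated.
assert np.allclose(y, W @ x)
```

The zero initialization of `B` is what makes the adapter "pluggable": it can be attached or merged into `W` without changing the model's initial behavior.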

Heterogeneous LoRA for federated fine-tuning of on-device foundation models

YJ Cho, L Liu, Z Xu, A Fahrezi… - Proceedings of the 2024 …, 2024 - aclanthology.org
Foundation models (FMs) adapt surprisingly well to downstream tasks with fine-tuning.
However, their colossal parameter space prohibits their training on resource-constrained …

FedBiOT: LLM local fine-tuning in federated learning without full model

F Wu, Z Li, Y Li, B Ding, J Gao - Proceedings of the 30th ACM SIGKDD …, 2024 - dl.acm.org
Large language models (LLMs) show impressive performance on many domain-specific tasks
after fine-tuning with appropriate data. However, much domain-specific data is …

Federated large language models: Current progress and future directions

Y Yao, J Zhang, J Wu, C Huang, Y Xia, T Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …

FeDeRA: Efficient fine-tuning of language models in federated learning leveraging weight decomposition

Y Yan, Q Yang, S Tang, Z Shi - arXiv preprint arXiv:2404.18848, 2024 - arxiv.org
Despite their exceptional performance on various tasks after fine-tuning, pre-trained
language models (PLMs) face significant challenges due to growing privacy concerns with …

Unlocking the potential of prompt-tuning in bridging generalized and personalized federated learning

W Deng, C Thrampoulidis, X Li - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art
performance with improved efficiency in various computer vision tasks. This suggests a …

FedBPT: Efficient federated black-box prompt tuning for large language models

J Sun, Z Xu, H Yin, D Yang, D Xu, Y Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained language models (PLMs) have revolutionized the NLP landscape, achieving
stellar performance across diverse tasks. These models, while benefiting from vast training …

PILoRA: Prototype-guided incremental LoRA for federated class-incremental learning

H Guo, F Zhu, W Liu, XY Zhang, CL Liu - European Conference on …, 2024 - Springer
Existing federated learning methods have effectively dealt with decentralized learning in
scenarios involving data privacy and non-IID data. However, in real-world situations, each …

FedMKT: Federated mutual knowledge transfer for large and small language models

T Fan, G Ma, Y Kang, H Gu, Y Song, L Fan… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent research in federated large language models (LLMs) has primarily focused on
enabling clients to fine-tune their locally deployed homogeneous LLMs collaboratively or on …

FedPE: Adaptive Model Pruning-Expanding for Federated Learning on Mobile Devices

L Yi, X Shi, N Wang, J Zhang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Recently, federated learning (FL) as a new learning paradigm allows multiple parties to
collaboratively train a shared global model with privacy protection. However, vanilla FL …