A survey on LoRA of large language models
Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
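The snippet above describes LoRA's core mechanism: a frozen dense weight is augmented with a pluggable low-rank update so that only the small factor matrices are trained. A minimal sketch of that idea (variable names and dimensions are ours for illustration, not from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 4  # hypothetical layer sizes and rank

W = rng.standard_normal((d_out, d_in))     # frozen pre-trained dense weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    # Adapted layer: y = W x + B A x. Only A and B (r * (d_in + d_out)
    # parameters) are updated during fine-tuning; W stays frozen.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer starts identical to the frozen one.
assert np.allclose(lora_forward(x), W @ x)
```

Because `B` starts at zero, fine-tuning begins from the pre-trained model's behavior, and the trained `B @ A` product can later be merged into `W` or kept as a detachable adapter.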
Heterogeneous LoRA for federated fine-tuning of on-device foundation models
Foundation models (FMs) adapt surprisingly well to downstream tasks with fine-tuning.
However, their colossal parameter space prohibits their training on resource-constrained …
FedBiOT: LLM local fine-tuning in federated learning without full model
Large language models (LLMs) show amazing performance on many domain-specific tasks
after fine-tuning with some appropriate data. However, many domain-specific data are …
Federated large language models: Current progress and future directions
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …
FeDeRA: Efficient fine-tuning of language models in federated learning leveraging weight decomposition
Despite their exceptional performance on various tasks after fine-tuning, pre-trained
language models (PLMs) face significant challenges due to growing privacy concerns with …
Unlocking the potential of prompt-tuning in bridging generalized and personalized federated learning
Abstract Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art
performance with improved efficiency in various computer vision tasks. This suggests a …
FedBPT: Efficient federated black-box prompt tuning for large language models
Pre-trained language models (PLMs) have revolutionized the NLP landscape, achieving
stellar performances across diverse tasks. These models, while benefiting from vast training …
PILoRA: Prototype-guided incremental LoRA for federated class-incremental learning
Existing federated learning methods have effectively dealt with decentralized learning in
scenarios involving data privacy and non-IID data. However, in real-world situations, each …
FedMKT: Federated mutual knowledge transfer for large and small language models
Recent research in federated large language models (LLMs) has primarily focused on
enabling clients to fine-tune their locally deployed homogeneous LLMs collaboratively or on …
FedPE: Adaptive Model Pruning-Expanding for Federated Learning on Mobile Devices
Recently, federated learning (FL), as a new learning paradigm, allows multiple parties to
collaboratively train a shared global model with privacy protection. However, vanilla FL …