A survey on LoRA of large language models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
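
The snippet above describes LoRA's core mechanism: the pre-trained dense weight W stays frozen while a pluggable low-rank update BA is trained alongside it. A minimal PyTorch sketch of such a layer follows; the class name, rank r, and alpha scaling are illustrative assumptions, not code from the survey.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a pluggable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained dense weights are not updated
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection; only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))
```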

FederatedScope-LLM: A comprehensive package for fine-tuning large language models in federated learning

W Kuang, B Qian, Z Li, D Chen, D Gao, X Pan… - Proceedings of the 30th …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated great capabilities in various natural
language understanding and generation tasks. These pre-trained LLMs can be further …

Heterogeneous LoRA for federated fine-tuning of on-device foundation models

YJ Cho, L Liu, Z Xu, A Fahrezi, G Joshi - arXiv preprint arXiv:2401.06432, 2024 - arxiv.org
Foundation models (FMs) adapt well to specific domains or tasks with fine-tuning, and
federated learning (FL) enables the potential for privacy-preserving fine-tuning of the FMs …

Data-Juicer: A one-stop data processing system for large language models

D Chen, Y Huang, Z Ma, H Chen, X Pan, C Ge… - Companion of the 2024 …, 2024 - dl.acm.org
The immense evolution in Large Language Models (LLMs) has underscored the importance
of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from …

Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes

Z Qin, D Chen, B Qian, B Ding, Y Li, S Deng - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained large language models (LLMs) need fine-tuning to improve their responsiveness
to natural language instructions. Federated learning offers a way to fine-tune LLMs using the …

Knowledge-Empowered, Collaborative, and Co-Evolving AI Models: The Post-LLM Roadmap

F Wu, T Shen, T Bäck, J Chen, G Huang, Y … - Engineering, 2024 - Elsevier
Large language models (LLMs) have significantly advanced artificial intelligence (AI) by
excelling in tasks such as understanding, generation, and reasoning across multiple …

On the convergence of zeroth-order federated tuning for large language models

Z Ling, D Chen, L Yao, Y Li, Y Shen - Proceedings of the 30th ACM …, 2024 - dl.acm.org
The confluence of Federated Learning (FL) and Large Language Models (LLMs) is ushering
in a new era in privacy-preserving natural language processing. However, the intensive …
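
For context on the technique named in the title: zeroth-order methods estimate gradients from forward passes alone, avoiding the memory cost of backpropagation on resource-constrained clients. Below is a generic two-point (SPSA-style) estimator as a hedged sketch; it is not necessarily the exact estimator the paper analyzes.

```python
import torch

def zeroth_order_grad(loss_fn, params: torch.Tensor, mu: float = 1e-3) -> torch.Tensor:
    """Two-point gradient estimate: g ~ (L(theta + mu*z) - L(theta - mu*z)) / (2*mu) * z."""
    z = torch.randn_like(params)          # random Gaussian perturbation direction
    loss_plus = loss_fn(params + mu * z)  # forward pass only, no autograd needed
    loss_minus = loss_fn(params - mu * z)
    return (loss_plus - loss_minus) / (2 * mu) * z

# Toy usage: minimize ||theta||^2 with forward evaluations only.
theta = torch.randn(10)
for _ in range(100):
    g = zeroth_order_grad(lambda p: (p ** 2).sum(), theta)
    theta = theta - 0.1 * g
```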

Federated LoRA with sparse communication

K Kuo, A Raje, K Rajesh, V Smith - arXiv preprint arXiv:2406.05233, 2024 - arxiv.org
Low-rank adaptation (LoRA) is a natural method for finetuning in communication-constrained
machine learning settings such as cross-device federated learning. Prior work …

FDLoRA: personalized federated learning of large language model via dual LoRA tuning

J Qi, Z Luan, S Huang, C Fung, H Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have emerged as important components across various
fields, yet their training requires substantial computation resources and abundant labeled …

The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective

Z Qin, D Chen, W Zhang, L Yao, Y Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of large language models (LLMs) has been witnessed in recent
years. Based on the powerful LLMs, multi-modal LLMs (MLLMs) extend the modality from …