A Survey on LoRA of Large Language Models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
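The snippet above describes LoRA's core mechanism: a frozen dense weight is augmented with a pluggable low-rank update. A minimal sketch of that idea (an assumed illustration in NumPy, not code from any of the cited papers; the scaling factor alpha / r and zero-initialized up-projection follow the common LoRA convention):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 16, 2, 4   # rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    """Base layer output plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# B is zero-initialized, so before training the LoRA layer
# reproduces the frozen base layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (r * (d_in + d_out) parameters) are trained, which is what makes the adaptation parameter-efficient; the update B @ A can be merged into W after fine-tuning or kept separate as a pluggable module.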

CE-LoRA: Computation-Efficient LoRA Fine-Tuning for Language Models

G Chen, Y He, Y Hu, K Yuan, B Yuan - arXiv preprint arXiv:2502.01378, 2025 - arxiv.org
Large Language Models (LLMs) demonstrate exceptional performance across various tasks
but demand substantial computational resources even for fine-tuning. Although …

Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large Language Models

W Fang, DJ Han, L Yuan, S Hosseinalipour… - arXiv preprint arXiv …, 2025 - arxiv.org
Fine-tuning large language models (LLMs) on devices is attracting increasing interest.
Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning …

Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models

A Fernandez-Lopez, S Liu, L Yin, S Petridis… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper investigates the under-explored area of low-rank weight training for large-scale
Conformer-based speech recognition models from scratch. Our study demonstrates the …

I3S: Importance Sampling Subspace Selection for Low-Rank Optimization in LLM Pretraining

H Zhang, J Yin, G Wang, Z Liu, T Zhang… - arXiv preprint arXiv …, 2025 - arxiv.org
Low-rank optimization has emerged as a promising approach to enabling memory-efficient
training of large language models (LLMs). Existing low-rank optimization methods typically …

CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation

Z Liu, R Zhang, Z Wang, Z Yang, P Hovland… - arXiv preprint arXiv …, 2025 - arxiv.org
Large language models (LLMs) are revolutionizing many science and engineering fields.
However, their huge model sizes impose extremely demanding needs of computational …

Low-Rank Adaptation for Scalable Fine-Tuning of Pre-Trained Language Models

H Dong, J Shun - 2025 - preprints.org
Low-Rank Adaptation (LoRA) is a computationally efficient approach for fine-tuning large
pre-trained language models, designed to reduce memory and computational overhead by …

Fine-Tuning Transformers Efficiently: A Survey on LoRA and Its Impact

M Huan, J Shun - 2025 - preprints.org
The rapid growth of Large Language Models (LLMs) has revolutionized natural language
processing (NLP), enabling remarkable advancements in text generation, machine …

Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization

Z Mo, LK Huang, SJ Pan - The Thirteenth International Conference on … - openreview.net
Pretraining large language models often requires significant computational resources and
memory due to their vast number of parameters. An effective approach to enhance parameter …

Approximations may be all you need: Towards pre-training LLMs with low-rank decomposition and optimizers

N Shivagunde, M Kulkarni, G Karamanolakis… - 2024 - amazon.science
Large language models (LLMs) have achieved remarkable performance on various natural
language processing tasks, but training LLMs at scale is extremely resource-intensive …