A Survey on LoRA of Large Language Models
Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
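To make the mechanism in this snippet concrete, here is a minimal sketch of a LoRA-style layer in PyTorch: a frozen dense layer augmented with a pluggable low-rank update B A. The class name, rank r = 8, and alpha/r scaling below are illustrative choices, not taken from the survey.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)  # stands in for a pretrained dense layer
        for p in self.base.parameters():
            p.requires_grad_(False)                        # frozen; only the low-rank factors train
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init, so the update starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # shape (2, 768)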
CE-LoRA: Computation-Efficient LoRA Fine-Tuning for Language Models
Large Language Models (LLMs) demonstrate exceptional performance across various tasks
but demand substantial computational resources even for fine-tuning. Although …
Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large Language Models
Fine-tuning large language models (LLMs) on devices is attracting increasing interest.
Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning …
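As a hedged illustration of fusing LoRA with federated fine-tuning: the snippet below shows plain per-matrix FedAvg over clients' adapter factors, a naive baseline rather than the sketching scheme this paper proposes; the shapes and client sizes are assumed.

import torch

def fedavg_lora(client_adapters, client_sizes):
    # Weighted average of each client's (A, B) LoRA factors by local dataset size.
    total = sum(client_sizes)
    avg_A = sum(n * A for (A, _), n in zip(client_adapters, client_sizes)) / total
    avg_B = sum(n * B for (_, B), n in zip(client_adapters, client_sizes)) / total
    return avg_A, avg_B

# Three clients, each holding rank-8 adapters for a 768-dim layer (illustrative shapes).
clients = [(torch.randn(8, 768), torch.randn(768, 8)) for _ in range(3)]
A_global, B_global = fedavg_lora(clients, client_sizes=[100, 50, 25])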
Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models
This paper investigates the under-explored area of low-rank weight training for large-scale
Conformer-based speech recognition models from scratch. Our study demonstrates the …
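A minimal sketch of what low-rank weight training from scratch can look like: each dense weight is replaced by a trainable rank-r factorization U V, and both factors are learned directly. The rank and dimensions below are illustrative, not the paper's Conformer configuration.

import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, in_features, out_features, r):
        super().__init__()
        # r * (in_features + out_features) parameters instead of in_features * out_features
        self.U = nn.Parameter(torch.randn(out_features, r) / r ** 0.5)
        self.V = nn.Parameter(torch.randn(r, in_features) / in_features ** 0.5)

    def forward(self, x):
        return x @ self.V.t() @ self.U.t()

layer = LowRankLinear(512, 512, r=64)
print(sum(p.numel() for p in layer.parameters()))  # 65,536 vs. 262,144 for a dense 512x512 layer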
I3S: Importance Sampling Subspace Selection for Low-Rank Optimization in LLM Pretraining
Low-rank optimization has emerged as a promising approach to enabling memory-efficient
training of large language models (LLMs). Existing low-rank optimization methods typically …
CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation
Large language models (LLMs) are revolutionizing many science and engineering fields.
However, their huge model sizes place extremely heavy demands on computational …
Low-Rank Adaptation for Scalable Fine-Tuning of Pre-Trained Language Models
H Dong, J Shun - 2025 - preprints.org
Low-Rank Adaptation (LoRA) is a computationally efficient approach for fine-tuning large
pre-trained language models, designed to reduce memory and computational overhead by …
Fine-Tuning Transformers Efficiently: A Survey on LoRA and Its Impact
M Huan, J Shun - 2025 - preprints.org
The rapid growth of Large Language Models (LLMs) has revolutionized natural language
processing (NLP), enabling remarkable advancements in text generation, machine …
Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization
Pretraining large language models often requires significant computational resources and
memory due to their vast number of parameters. An effective approach to enhance parameter …
Approximations may be all you need: Towards pre-training LLMs with low-rank decomposition and optimizers
Large language models (LLMs) have achieved remarkable performance on various natural
language processing tasks, but training LLMs at scale is extremely resource-intensive …
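As a hedged aside on the "low-rank decomposition" in this title (the snippet does not show the paper's exact factorization or optimizer changes), a truncated SVD gives the classic rank-r approximation of a weight matrix:

import torch

def low_rank_approx(W, r):
    # Best rank-r approximation of W in the Frobenius norm (Eckart-Young).
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]

W = torch.randn(1024, 1024)
W_r = low_rank_approx(W, r=128)
print(torch.linalg.matrix_norm(W - W_r) / torch.linalg.matrix_norm(W))  # relative approximation error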