Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
A survey on LoRA of large language models
Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Low-Rank Adaptation (LoRA), which updates dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
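As these abstracts describe it, LoRA freezes the pretrained dense weights and trains a pluggable low-rank update alongside them. A minimal PyTorch sketch of that idea (my own illustration, not code from any of the cited papers; the class name LoRALinear and the hyperparameters r and alpha are assumptions):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer augmented with a pluggable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the low-rank factors are trained
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Because the update is the product of two small matrices, each adapted layer trains only r * (in_features + out_features) extra parameters, and the product B A can be merged back into the base weight after training.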
LoRA learns less and forgets less
Low-Rank Adaptation (LoRA) is a widely used parameter-efficient fine-tuning method for
large language models. LoRA saves memory by training only low-rank perturbations to …
ReFT: Representation finetuning for language models
Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a
small number of weights. However, much prior interpretability work has shown that …
MELoRA: mini-ensemble low-rank adapters for parameter-efficient fine-tuning
Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring pre-trained large
language models (LLMs), especially as the models' scale and the diversity of tasks increase …
Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying
We propose Tied-LoRA, a simple paradigm that utilizes weight tying and selective training to
further increase the parameter efficiency of the Low-Rank Adaptation (LoRA) method. Our …
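The weight tying this abstract refers to can be pictured as sharing one pair of low-rank factors across every layer, instead of allocating a fresh pair per layer as plain LoRA does. A rough sketch under that reading (illustrative only; TiedLoRABlock and the dimensions below are hypothetical, not the paper's implementation):

```python
import torch
import torch.nn as nn

class TiedLoRABlock(nn.Module):
    """One adapted layer that reuses LoRA factors shared across the whole stack."""
    def __init__(self, base: nn.Linear, shared_A: nn.Parameter, shared_B: nn.Parameter,
                 alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A, self.B = shared_A, shared_B          # tied: same tensors in every block
        self.scaling = alpha / shared_A.shape[0]

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# One pair of low-rank factors serves all layers, so the trainable parameter count
# no longer grows with depth (dimensions here are made up for illustration).
d_model, r, n_layers = 768, 8, 12
shared_A = nn.Parameter(torch.randn(r, d_model) * 0.01)
shared_B = nn.Parameter(torch.zeros(d_model, r))
blocks = nn.ModuleList(
    TiedLoRABlock(nn.Linear(d_model, d_model), shared_A, shared_B) for _ in range(n_layers)
)
```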
Lottery ticket adaptation: Mitigating destructive interference in LLMs
Existing methods for adapting large language models (LLMs) to new tasks are not suited to
multi-task adaptation because they modify all the model weights--causing destructive …
LoRA+: Efficient low rank adaptation of large models
In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in Hu et
al. (2021) leads to suboptimal finetuning of models with large width (embedding dimension) …
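LoRA+ addresses this by training the two low-rank factors with different learning rates, giving the B matrix a larger step size than A. A sketch of how one might set that up with optimizer parameter groups (the helper loraplus_param_groups and the lr_ratio value are my own illustration, and the lora_A/lora_B names assume the LoRA sketch above):

```python
import torch

def loraplus_param_groups(model, lr: float = 2e-4, lr_ratio: float = 16.0):
    """Assign the B factors a larger learning rate than the A factors."""
    a_params = [p for n, p in model.named_parameters() if "lora_A" in n and p.requires_grad]
    b_params = [p for n, p in model.named_parameters() if "lora_B" in n and p.requires_grad]
    return [
        {"params": a_params, "lr": lr},
        {"params": b_params, "lr": lr * lr_ratio},
    ]

# Usage (hypothetical): optimizer = torch.optim.AdamW(loraplus_param_groups(model))
```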
Survey of different large language model architectures: Trends, benchmarks, and challenges
Large Language Models (LLMs) represent a class of deep learning models adept at
understanding natural language and generating coherent responses to various prompts or …
A Practitioner's Guide to Continual Multimodal Pretraining
Multimodal foundation models serve numerous applications at the intersection of vision and
language. Still, despite being pretrained on extensive data, they become outdated over time …