Mobile edge intelligence for large language models: A contemporary survey
On-device large language models (LLMs), referring to running LLMs on edge devices, have
raised considerable interest since they are more cost-effective, latency-efficient, and privacy …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
PiSSA: Principal singular values and singular vectors adaptation of large language models
To parameter-efficiently fine-tune (PEFT) large language models (LLMs), the low-rank
adaptation (LoRA) method approximates the model changes $\Delta W\in\mathbb{R}^{m\times n}$ …
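A minimal sketch of the low-rank update that LoRA applies and that PiSSA re-initializes from an SVD of the pretrained weight, assuming a PyTorch linear layer; the rank, scaling, and shapes below are illustrative choices, not the papers' exact settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update: W + (alpha/r) * B @ A.

    Sketch only: LoRA initializes A randomly and B to zero; PiSSA would instead
    initialize A and B from the principal singular values/vectors of W (not shown).
    """
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)              # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = x W^T + scaling * x (B A)^T
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Usage: only the adapter factors A and B receive gradients.
layer = LoRALinear(768, 768, rank=8)
y = layer(torch.randn(2, 768))
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['A', 'B']
```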
ReFT: Representation finetuning for language models
Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via
updates to a small number of weights. However, much prior interpretability work has shown …
A survey on LoRA of large language models
Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
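A brief sketch of what "pluggable" means in practice, assuming the standard merge W' = W + BA used by LoRA-style adapters; the tensor shapes and scaling factor here are illustrative.

```python
import torch

def merge_lora(W, A, B, scaling=1.0):
    """Fold a low-rank adapter into the dense weight: W' = W + scaling * B @ A."""
    return W + scaling * (B @ A)

def unmerge_lora(W_merged, A, B, scaling=1.0):
    """Remove the adapter again, e.g. to plug in a different task's adapter."""
    return W_merged - scaling * (B @ A)

W = torch.randn(768, 768)          # pretrained dense weight (illustrative size)
A = torch.randn(8, 768) * 0.01     # adapter factors, rank 8
B = torch.randn(768, 8) * 0.01
W_task = merge_lora(W, A, B)       # deploy merged: no extra inference cost
W_back = unmerge_lora(W_task, A, B)
print(torch.allclose(W, W_back, atol=1e-5))  # True, up to floating-point error
```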
MELoRA: mini-ensemble low-rank adapters for parameter-efficient fine-tuning
Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring pre-trained large
language models (LLMs), especially as the models' scale and the diversity of tasks increase …
Parameter-efficient orthogonal finetuning via butterfly factorization
Large foundation models are becoming ubiquitous, but training them from scratch is
prohibitively expensive. Thus, efficiently adapting these powerful models to downstream …
KIND: Knowledge integration and diversion in diffusion models
Pre-trained models have become the preferred backbone due to the expansion of model
parameters, with techniques like Parameter-Efficient Fine-Tuning (PEFT) typically fixing the …
Parameter-efficient fine-tuning in large models: A survey of methodologies
L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large models, as predicted by scaling-law forecasts, have made groundbreaking
progress in many fields, particularly in natural language generation tasks, where they have …
SVFit: Parameter-efficient fine-tuning of large pre-trained models using singular values
Large pre-trained models (LPMs) have demonstrated exceptional performance in diverse
natural language processing and computer vision tasks. However, fully fine-tuning these …