Mobile edge intelligence for large language models: A contemporary survey

G Qu, Q Chen, W Wei, Z Lin, X Chen… - … Surveys & Tutorials, 2025 - ieeexplore.ieee.org
On-device large language models (LLMs), referring to running LLMs on edge devices, have
raised considerable interest since they are more cost-effective, latency-efficient, and privacy …

Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment

L Xu, H Xie, SZJ Qin, X Tao, FL Wang - arxiv preprint arxiv:2312.12148, 2023 - arxiv.org
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …

Pissa: Principal singular values and singular vectors adaptation of large language models

F Meng, Z Wang, M Zhang - Advances in Neural …, 2025 - proceedings.neurips.cc
To parameter-efficiently fine-tune (PEFT) large language models (LLMs), the low-rank
adaptation (LoRA) method approximates the model changes $\Delta W\in\mathbb{R}^{m\times n}$ …
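The snippet's formula is cut off mid-expression; as a rough sketch of what it refers to, using standard LoRA notation plus the SVD-based initialization PiSSA is named for (the symbols r, U, Σ, V and the residual term W^res are assumptions beyond this excerpt):

```latex
% LoRA: approximate the weight change with a trainable low-rank product
\Delta W \approx A B, \qquad A \in \mathbb{R}^{m \times r},\; B \in \mathbb{R}^{r \times n},\; r \ll \min(m, n)

% PiSSA (sketch of the idea): initialize the adapter from the principal
% singular values and vectors of the pretrained weight W itself
W = U \Sigma V^{\top}, \qquad
A_0 = U_{[:,\,:r]}\, \Sigma_{[:r,\,:r]}^{1/2}, \qquad
B_0 = \Sigma_{[:r,\,:r]}^{1/2}\, V_{[:,\,:r]}^{\top}, \qquad
W^{\mathrm{res}} = W - A_0 B_0 \ \text{(kept frozen)}
```

Under this initialization the trainable pair (A, B) starts from the dominant components of W rather than from zeros or random noise, which is the essential difference from vanilla LoRA.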

Reft: Representation finetuning for language models

Z Wu, A Arora, Z Wang, A Geiger… - Advances in …, 2025 - proceedings.neurips.cc
Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via
updates to a small number of weights. However, much prior interpretability work has shown …

A survey on lora of large language models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best performed parameter efficient fine-tuning …
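Since several entries in this listing build on the same "pluggable low-rank matrices" idea, a minimal sketch of a LoRA-augmented linear layer may help; this is an illustrative PyTorch snippet, not code from any of the surveyed papers, and the class name, rank, and alpha defaults are hypothetical:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: a frozen dense weight W plus a pluggable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained dense weight, kept frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Trainable low-rank factors: delta_W = B @ A with rank r << min(d_out, d_in).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero-init so delta_W starts at 0
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T                      # frozen pretrained path
        update = (x @ self.lora_A.T) @ self.lora_B.T  # trainable low-rank path
        return base + self.scaling * update
```

Only lora_A and lora_B receive gradients, so the trainable parameter count is r·(d_in + d_out) rather than d_in·d_out, and the learned update can be merged back into W after training, which is what makes the adapter "pluggable".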

MELoRA: mini-ensemble low-rank adapters for parameter-efficient fine-tuning

P Ren, C Shi, S Wu, M Zhang, Z Ren… - Proceedings of the …, 2024 - aclanthology.org
Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring pre-trained large
language models (LLMs), especially as the models' scale and the diversity of tasks increase …

Parameter-efficient orthogonal finetuning via butterfly factorization

W Liu, Z Qiu, Y Feng, Y Xiu, Y Xue, L Yu… - arxiv preprint arxiv …, 2023 - arxiv.org
Large foundation models are becoming ubiquitous, but training them from scratch is
prohibitively expensive. Thus, efficiently adapting these powerful models to downstream …

Kind: Knowledge integration and diversion in diffusion models

Y Xie, F Feng, J Wang, X Geng, Y Rui - arxiv preprint arxiv:2408.07337, 2024 - arxiv.org
Pre-trained models have become the preferred backbone due to the expansion of model
parameters, with techniques like Parameter-Efficient Fine-Tuning (PEFTs) typically fixing the …

Parameter-efficient fine-tuning in large models: A survey of methodologies

L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - arxiv preprint arxiv …, 2024 - arxiv.org
The large models, as predicted by scaling law forecasts, have made groundbreaking
progress in many fields, particularly in natural language generation tasks, where they have …

Svfit: Parameter-efficient fine-tuning of large pre-trained models using singular values

C Sun, J Wei, Y Wu, Y Shi, S He, Z Ma, N **e… - arxiv preprint arxiv …, 2024 - arxiv.org
Large pre-trained models (LPMs) have demonstrated exceptional performance in diverse
natural language processing and computer vision tasks. However, fully fine-tuning these …