Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
IncreLoRA: Incremental parameter allocation method for parameter-efficient fine-tuning
With the increasing size of pre-trained language models (PLMs), fine-tuning all the
parameters in the model is not efficient, especially when there are a large number of …
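The title, more than the truncated snippet, names the mechanism: LoRA modules begin at a small rank, and the trainable-parameter budget is grown incrementally where it appears most useful. Below is a minimal PyTorch sketch of that idea; the growable adapter and the gradient-based importance score are illustrative assumptions, not the paper's actual allocation criterion.

import torch
import torch.nn as nn

class GrowableLoRALinear(nn.Module):
    # Frozen base weight plus a low-rank update B @ A whose rank can grow.
    def __init__(self, base: nn.Linear, init_rank: int = 1, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the pretrained weight stays frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(init_rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, init_rank))
        self.alpha = alpha

    def forward(self, x):
        rank = self.A.shape[0]
        return self.base(x) + (x @ self.A.T @ self.B.T) * (self.alpha / rank)

    @torch.no_grad()
    def grow_rank(self, extra: int = 1):
        # Append new rank-1 components; the new B columns start at zero,
        # so the function computed by the layer is unchanged.
        d_in, d_out = self.A.shape[1], self.B.shape[0]
        self.A = nn.Parameter(torch.cat([self.A, torch.randn(extra, d_in) * 0.01]))
        self.B = nn.Parameter(torch.cat([self.B, torch.zeros(d_out, extra)], dim=1))

    def importance(self) -> float:
        # Crude |grad * weight| score for deciding which module to grow;
        # a stand-in assumption, not the paper's criterion.
        return sum((p.grad * p).abs().sum().item()
                   for p in (self.A, self.B) if p.grad is not None)

After each grow_rank call the optimizer has to be rebuilt (or the new parameters registered with it), since fresh nn.Parameter objects are created.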
Survival of the most influential prompts: Efficient black-box prompt search via clustering and pruning
Prompt-based learning has been an effective paradigm for large pretrained language
models (LLMs), enabling few-shot or even zero-shot learning. Black-box prompt search has …
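The title spells out the search strategy the snippet cuts off: candidate prompts are scored only through a black-box evaluator, clustered by a cheap embedding, and pruned to the best prompt per cluster each round. A hedged sketch, where score_prompt, propose, and the hashed embedding are all assumed stand-ins:

import numpy as np
from sklearn.cluster import KMeans

def embed(prompt: str, dim: int = 64) -> np.ndarray:
    # Hashed bag-of-words vector; a real system would use a sentence encoder.
    v = np.zeros(dim)
    for tok in prompt.split():
        v[hash(tok) % dim] += 1.0
    return v

def cluster_and_prune_search(candidates, score_prompt, propose=None,
                             rounds=3, n_clusters=4):
    # score_prompt: opaque evaluator (e.g., dev-set accuracy via an API call).
    # propose: optional hook that derives a new candidate from a survivor.
    pool = list(candidates)
    for _ in range(rounds):
        k = min(n_clusters, len(pool))
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(
            np.stack([embed(p) for p in pool]))
        # Prune: keep only the highest-scoring prompt in each cluster.
        pool = [max((p for p, l in zip(pool, labels) if l == c), key=score_prompt)
                for c in range(k)]
        if propose is not None:
            pool += [propose(p) for p in pool]  # expand around the survivors
    return max(pool, key=score_prompt)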
Increasing model capacity for free: A simple strategy for parameter efficient fine-tuning
Fine-tuning large pre-trained foundation models, such as the 175B-parameter GPT-3, has recently attracted growing attention for downstream tasks. While parameter-efficient fine-tuning methods …
Propulsion: Steering LLM with Tiny Fine-Tuning
The rapid advancements in Large Language Models (LLMs) have revolutionized natural
language processing (NLP) and related fields. However, fine-tuning these models for …
RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
We propose RoCoFT, a parameter-efficient fine-tuning method for large-scale language
models (LMs) based on updating only a few rows and columns of the weight matrices in …
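The snippet states the mechanism directly: only a few rows and columns of each weight matrix are updated. One simple way to realize that in PyTorch is a gradient mask on an otherwise frozen-in-place weight; picking the first three rows and columns below is purely illustrative.

import torch
import torch.nn as nn

def restrict_to_rows_cols(linear: nn.Linear, rows, cols):
    # Build a 0/1 mask that is nonzero only on the chosen rows and columns,
    # then zero the gradient everywhere else on each backward pass.
    mask = torch.zeros_like(linear.weight)
    mask[rows, :] = 1.0
    mask[:, cols] = 1.0
    linear.weight.register_hook(lambda grad: grad * mask)
    return linear

layer = restrict_to_rows_cols(nn.Linear(768, 768), rows=[0, 1, 2], cols=[0, 1, 2])
layer(torch.randn(4, 768)).sum().backward()
assert torch.all(layer.weight.grad[3:, 3:] == 0)  # untouched entries get no update

One caveat of this masking shortcut: an optimizer with weight decay would still shrink the masked-out entries, so the decay term needs to be zero (or the update applied only to the selected slices).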
Decomposed prompt tuning via low-rank reparameterization
While prompt tuning approaches have achieved competitive performance with high
efficiency, we observe that they invariably employ the same initialization process, wherein …
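The low-rank reparameterization in the title is easy to make concrete: rather than learning a full (prompt_length x hidden) soft-prompt matrix, learn two small factors whose product reconstructs it. A minimal PyTorch sketch with illustrative shapes:

import torch
import torch.nn as nn

class LowRankPrompt(nn.Module):
    # Soft prompt stored as U @ V: r*(L + H) trainable parameters instead of L*H.
    def __init__(self, prompt_len: int = 100, hidden: int = 768, rank: int = 4):
        super().__init__()
        self.U = nn.Parameter(torch.randn(prompt_len, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, hidden) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Reconstruct the prompt and prepend it to the token embeddings.
        prompt = (self.U @ self.V).unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

At these shapes the trainable-parameter count drops from 100 x 768 = 76,800 to 4 x (100 + 768) = 3,472.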
Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
H Zhao, J Fu, Z He - arXiv preprint arXiv:2310.11670, 2023 - arxiv.org
Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting pre-trained language models to downstream tasks while only updating a small number of …
RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization
D Pu, V Demberg - arXiv preprint arXiv:2405.00657, 2024 - arxiv.org
For long document summarization, discourse structure is important for discerning the key content of the text and the differences in importance between sentences. Unfortunately …