Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
Efficient large language models: A survey
Large Language Models (LLMs) have demonstrated remarkable capabilities in important
tasks such as natural language understanding and language generation, and thus have the …
GaLore: Memory-efficient LLM training by gradient low-rank projection
Training Large Language Models (LLMs) presents significant memory challenges,
predominantly due to the growing size of weights and optimizer states. Common memory …
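A minimal sketch of the gradient low-rank projection idea: the full gradient is compressed into a small subspace, the optimizer state lives in that subspace, and the update is projected back. This uses plain momentum in place of GaLore's actual Adam-based optimizer; all names and hyperparameters here are illustrative, not from the paper.

```python
import torch

def galore_step(weight, grad, state, r=4, lr=1e-3, proj_refresh=200):
    """One training step with rank-r gradient projection (simplified sketch)."""
    # Periodically refresh the projection subspace from the current gradient's SVD.
    if state.get("step", 0) % proj_refresh == 0:
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        state["P"] = U[:, :r]                         # m x r projector
        state["m"] = torch.zeros(r, grad.shape[1])    # momentum stored in the small space
    state["step"] = state.get("step", 0) + 1
    g_low = state["P"].T @ grad                       # r x n compressed gradient
    state["m"] = 0.9 * state["m"] + g_low             # momentum update in low rank
    with torch.no_grad():
        weight -= lr * (state["P"] @ state["m"])      # project the update back to m x n

# usage: optimizer state for a 64x32 weight never exceeds rank 4
W, st = torch.randn(64, 32), {}
for _ in range(10):
    galore_step(W, torch.randn_like(W), st)
```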
LoraHub: Efficient cross-task generalization via dynamic LoRA composition
Low-rank adaptations (LoRA) are often employed to fine-tune large language models
(LLMs) for new tasks. This paper investigates LoRA composability for cross-task …
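A rough sketch of the composition idea: several pretrained LoRA (A, B) pairs are merged by scalar weights into one module for a new task. LoraHub searches those weights gradient-free on a few examples; the element-wise merge below is a simplified illustration, not the paper's exact procedure.

```python
import torch

def compose_lora(modules, weights):
    """Weighted element-wise merge of LoRA pairs:
    A_hat = sum_i w_i * A_i,  B_hat = sum_i w_i * B_i."""
    A_hat = sum(w * A for (A, _), w in zip(modules, weights))
    B_hat = sum(w * B for (_, B), w in zip(modules, weights))
    return B_hat @ A_hat  # composed low-rank delta for the frozen base weight

# e.g. three rank-8 modules for a 512x512 layer; weights are hypothetical
mods = [(torch.randn(8, 512), torch.randn(512, 8)) for _ in range(3)]
delta = compose_lora(mods, [0.5, 0.3, 0.2])
```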
End-edge-cloud collaborative computing for deep learning: A comprehensive survey
The booming development of deep learning applications and services heavily relies on
large deep learning models and massive data in the cloud. However, cloud-based deep …
Efficient multimodal large language models: A survey
In the past year, Multimodal Large Language Models (MLLMs) have demonstrated
remarkable performance in tasks such as visual question answering, visual understanding …
HydraLoRA: An asymmetric LoRA architecture for efficient fine-tuning
Adapting Large Language Models (LLMs) to new tasks through fine-tuning has
been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) …
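The snippet stops before describing the architecture; as a hedged illustration, assuming the asymmetry means one shared down-projection A with several routed up-projection heads B (a common reading of the design, with hypothetical names and sizes):

```python
import torch
import torch.nn as nn

class AsymmetricLoRA(nn.Module):
    """Sketch: shared low-rank A, multiple B heads mixed by a learned router."""
    def __init__(self, d_in, d_out, r=8, n_heads=3):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)    # shared down-projection
        self.Bs = nn.ParameterList(
            [nn.Parameter(torch.zeros(d_out, r)) for _ in range(n_heads)]
        )
        self.router = nn.Linear(d_in, n_heads)

    def forward(self, x):
        h = x @ self.A.T                                      # (batch, r) shared features
        gates = torch.softmax(self.router(x), dim=-1)         # per-token head weights
        heads = torch.stack([h @ B.T for B in self.Bs], dim=-1)  # (batch, d_out, n_heads)
        return (heads * gates.unsqueeze(-2)).sum(-1)          # weighted sum of B heads
```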
Delta-LoRA: Fine-tuning high-rank parameters with the delta of low-rank matrices
In this paper, we present Delta-LoRA, a novel parameter-efficient approach to fine-tune
large language models (LLMs). In contrast to LoRA and other low-rank adaptation …
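The gist, sketched under the assumption that the pretrained weight absorbs the step-to-step change of the low-rank product (the hyperparameter `lam` is hypothetical):

```python
import torch

def delta_lora_step(W, A, B, A_prev, B_prev, lam=2.0):
    """Fold the change in the low-rank product into the pretrained weight:
    W <- W + lam * (B_t A_t - B_{t-1} A_{t-1}),
    so W receives higher-rank updates without storing its own gradients."""
    with torch.no_grad():
        W += lam * (B @ A - B_prev @ A_prev)
```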
A survey on LoRA of large language models
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
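For orientation, the core mechanism this survey covers is small enough to sketch; a minimal PyTorch version, with illustrative layer names and hyperparameters:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a pluggable low-rank update:
    y = x W^T + (alpha / r) * x A^T B^T."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # only A and B are trained
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

Zero-initializing B makes the low-rank delta start at zero, so fine-tuning begins exactly at the pretrained model; after training, B @ A can be merged into the base weight or kept as a pluggable module.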