A review of deep learning techniques for speech processing

A Mehrish, N Majumder, R Bharadwaj, R Mihalcea… - Information …, 2023 - Elsevier
The field of speech processing has undergone a transformative shift with the advent of deep
learning. The use of multiple processing layers has enabled the creation of models capable …

Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, J Zhang, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …

LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention

R Zhang, J Han, C Liu, P Gao, A Zhou, X Hu… - arXiv preprint arXiv …, 2023 - arxiv.org
We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA
into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter …
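
The zero-init attention referenced here injects learnable adaption prompts as extra key/value tokens whose contribution is scaled by a gating factor initialized at zero, so fine-tuning starts from the frozen model's behavior. Below is a minimal PyTorch sketch of that mechanism; the module and parameter names (ZeroInitAdapterAttention, n_prompts, gate) are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn as nn

class ZeroInitAdapterAttention(nn.Module):
    """Attention over learnable adaption prompts, gated by a zero-initialized scalar."""
    def __init__(self, dim: int, n_prompts: int = 10):
        super().__init__()
        # Learnable adaption prompts acting as extra key/value tokens.
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        # Gate starts at zero, so the frozen model's output is untouched at step 0.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, q, k, v):
        # q, k, v: (batch, seq, dim) projections from the frozen attention layer.
        b, _, d = q.shape
        pk = self.prompts.unsqueeze(0).expand(b, -1, -1)  # prompt keys
        pv = pk                                           # prompt values (tied here for brevity)
        scale = d ** -0.5
        # Standard attention over the original tokens (normally computed by the frozen model).
        attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v
        # Attention over the adaption prompts, scaled by the learnable zero-init gate.
        prompt_attn = torch.softmax(q @ pk.transpose(-2, -1) * scale, dim=-1) @ pv
        return attn + torch.tanh(self.gate) * prompt_attn

# Toy usage with a single head of width 64.
q = k = v = torch.randn(2, 16, 64)
out = ZeroInitAdapterAttention(dim=64)(q, k, v)
```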

Parameter-efficient fine-tuning of large-scale pre-trained language models

N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su… - Nature Machine …, 2023 - nature.com
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning
paradigm, it has been continuously shown that larger models tend to yield better …

FinGPT: Democratizing internet-scale data for financial large language models

XY Liu, G Wang, H Yang, D Zha - arXiv preprint arXiv:2307.10485, 2023 - arxiv.org
Large language models (LLMs) have demonstrated remarkable proficiency in
understanding and generating human-like texts, which may potentially revolutionize the …

PiSSA: Principal singular values and singular vectors adaptation of large language models

F Meng, Z Wang, M Zhang - Advances in Neural …, 2025 - proceedings.neurips.cc
To parameter-efficiently fine-tune (PEFT) large language models (LLMs), the low-rank
adaptation (LoRA) method approximates the model changes $\Delta W\in\mathbb …
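
LoRA approximates the weight change ΔW with a trainable low-rank product B·A; PiSSA's idea is to initialize A and B from the principal singular values and vectors of the pretrained weight W and to freeze the residual. A minimal sketch of that initialization, with function and variable names (pissa_init, W_res) that are ours rather than the paper's:

```python
import torch

def pissa_init(W: torch.Tensor, r: int):
    """Split a pretrained weight into a frozen residual and trainable rank-r factors."""
    # Top-r SVD of the pretrained weight: W ~= U_r diag(S_r) V_r^T.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r, S_r, Vh_r = U[:, :r], S[:r], Vh[:r, :]
    # The trainable low-rank factors carry the principal components ...
    B = U_r * S_r.sqrt()                 # (out, r)
    A = S_r.sqrt().unsqueeze(1) * Vh_r   # (r, in)
    # ... while the frozen residual holds the remainder of W.
    W_res = W - B @ A
    return W_res, A, B

W = torch.randn(768, 768)
W_res, A, B = pissa_init(W, r=8)
# At initialization the adapted layer W_res + B @ A reproduces W exactly.
assert torch.allclose(W_res + B @ A, W, atol=1e-4)
```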

AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning

Q Zhang, M Chen, A Bukharin… - arXiv preprint arXiv …, 2023 - arxiv.org
Fine-tuning large pre-trained language models on downstream tasks has become an
important paradigm in NLP. However, common practice fine-tunes all of the parameters in a …
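
AdaLoRA parameterizes each layer's update in an SVD-like form and reallocates the rank budget across layers by pruning singular values with low importance. The sketch below illustrates a simplified version of that allocation step, scoring each singular value by |λ · ∂L/∂λ| and keeping the globally top-budget entries; the function name allocate_budget and this scoring rule are simplifications, not the paper's exact criterion.

```python
import torch

def allocate_budget(lams: list[torch.Tensor], budget: int) -> list[torch.Tensor]:
    """Keep the globally top-`budget` singular values across all layers, prune the rest."""
    # Importance of each singular value; gradients are assumed to have been
    # populated by a preceding backward() pass.
    scores = [(lam.detach() * lam.grad).abs() for lam in lams]
    flat = torch.cat(scores)
    # Global threshold: the budget-th largest importance score.
    thresh = torch.topk(flat, k=min(budget, flat.numel())).values.min()
    masks = [(s >= thresh).float() for s in scores]
    # Zero out the pruned singular values in place.
    with torch.no_grad():
        for lam, mask in zip(lams, masks):
            lam.mul_(mask)
    return masks

# Toy usage: three layers with rank-4 updates and a total budget of 6.
lams = [torch.rand(4, requires_grad=True) for _ in range(3)]
loss = sum((lam ** 2).sum() for lam in lams)
loss.backward()
masks = allocate_budget(lams, budget=6)
```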

Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment

L Xu, H **e, SZJ Qin, X Tao, FL Wang - arxiv preprint arxiv:2312.12148, 2023 - arxiv.org
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …

Deep class-incremental learning: A survey

DW Zhou, QW Wang, ZH Qi, HJ Ye… - arXiv preprint arXiv …, 2023 - researchgate.net
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results
in many vision tasks in the closed world. However, novel classes emerge from time to time in …

AdaptFormer: Adapting vision transformers for scalable visual recognition

S Chen, C Ge, Z Tong, J Wang… - Advances in …, 2022 - proceedings.neurips.cc
Pretraining Vision Transformers (ViTs) has achieved great success in visual
recognition. A following scenario is to adapt a ViT to various image and video recognition …
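
AdaptFormer's adaptation adds a lightweight bottleneck branch in parallel with the MLP sub-block of each transformer layer and trains only that branch while the pretrained ViT stays frozen. A minimal PyTorch sketch under that reading, with module names (ParallelAdapter, MlpBlockWithAdapter) chosen for illustration:

```python
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Bottleneck branch: down-projection, nonlinearity, up-projection, scaling."""
    def __init__(self, dim: int, bottleneck: int = 64, scale: float = 0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()
        self.scale = scale
        nn.init.zeros_(self.up.weight)  # zero-init so the branch starts as a no-op
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return self.scale * self.up(self.act(self.down(x)))

class MlpBlockWithAdapter(nn.Module):
    """Frozen ViT MLP sub-block with a trainable parallel adapter branch."""
    def __init__(self, dim: int, mlp: nn.Module):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = mlp                       # pretrained MLP, kept frozen
        self.adapter = ParallelAdapter(dim)  # the only trainable parameters
        for p in self.mlp.parameters():
            p.requires_grad = False

    def forward(self, x):
        h = self.norm(x)
        return x + self.mlp(h) + self.adapter(h)

# Toy usage on patch tokens of dimension 768.
mlp = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
block = MlpBlockWithAdapter(768, mlp)
out = block(torch.randn(2, 197, 768))
```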