A review of deep learning techniques for speech processing

A Mehrish, N Majumder, R Bharadwaj, R Mihalcea… - Information …, 2023 - Elsevier
The field of speech processing has undergone a transformative shift with the advent of deep
learning. The use of multiple processing layers has enabled the creation of models capable …

A survey on LoRA of large language models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
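The mechanism the snippet describes can be sketched in a few lines: a frozen dense weight is augmented with a pluggable product of two low-rank matrices, and only those small matrices would be trained. This is a minimal illustration with made-up dimensions and the common zero-initialization convention, not code from the surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8  # illustrative sizes; rank r << d

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained dense weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B receive gradient updates
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the base layer exactly,
# so training starts from the pre-trained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Because the update is the separate pair (A, B) rather than a modified W, the adapter is "pluggable": it can be stored, swapped, or merged into W after training.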

Fine-tuning language models with just forward passes

S Malladi, T Gao, E Nichani… - Advances in …, 2023 - proceedings.neurips.cc
Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but
as LMs grow in size, backpropagation requires a prohibitively large amount of memory …

Task arithmetic in the tangent space: Improved editing of pre-trained models

G Ortiz-Jimenez, A Favero… - Advances in Neural …, 2024 - proceedings.neurips.cc
Task arithmetic has recently emerged as a cost-effective and scalable approach to edit pre-
trained models directly in weight space: by adding the fine-tuned weights of different tasks …
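The weight-space editing the snippet refers to can be sketched as follows: a "task vector" is the difference between fine-tuned and pre-trained weights, and scaled task vectors are added to the base weights to combine tasks. The toy parameter values and the scaling coefficient `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy "checkpoints": the base model and two single-task fine-tunes.
theta_pre = {"w": np.array([1.0, 2.0, 3.0])}
theta_ft_a = {"w": np.array([1.5, 2.0, 3.0])}  # fine-tuned on task A
theta_ft_b = {"w": np.array([1.0, 2.5, 3.0])}  # fine-tuned on task B

def task_vector(ft, pre):
    # Task vector: fine-tuned weights minus pre-trained weights.
    return {k: ft[k] - pre[k] for k in pre}

def apply_task_vectors(pre, vectors, lam=1.0):
    # Edit the base model by adding scaled task vectors in weight space.
    out = {k: v.copy() for k, v in pre.items()}
    for tv in vectors:
        for k in out:
            out[k] += lam * tv[k]
    return out

merged = apply_task_vectors(
    theta_pre,
    [task_vector(theta_ft_a, theta_pre), task_vector(theta_ft_b, theta_pre)],
)
# merged["w"] -> [1.5, 2.5, 3.0]: both tasks' edits combined additively
```

Negating a task vector (`lam < 0`) is the corresponding subtraction edit, e.g. to remove a behavior; the paper's contribution concerns when such additions behave predictably (linearized, tangent-space fine-tuning).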

Sample-based explanations via generalized representers

CP Tsai, CK Yeh, P Ravikumar - Advances in Neural …, 2024 - proceedings.neurips.cc
We propose a general class of sample-based explanations of machine learning models,
which we term generalized representers. To measure the effect of a training sample on a …