Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, J Zhang, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …

MAGIS: LLM-based multi-agent framework for GitHub issue resolution

W Tao, Y Zhou, Y Wang, W Zhang… - Advances in Neural …, 2025 - proceedings.neurips.cc
In software development, resolving the emergent issues within GitHub repositories is a
complex challenge that involves not only the incorporation of new code but also the …

InstructZero: Efficient instruction optimization for black-box large language models

L Chen, J Chen, T Goldstein, H Huang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) are instruction followers, but it can be challenging to find
the best instruction for different situations, especially for black-box LLMs on which …

A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness

B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning, or few-shot learning, aims to effectively train a model using only a small amount of …

Survey of different large language model architectures: Trends, benchmarks, and challenges

M Shao, A Basit, R Karri, M Shafique - IEEE Access, 2024 - ieeexplore.ieee.org
Large Language Models (LLMs) represent a class of deep learning models adept at
understanding natural language and generating coherent responses to various prompts or …

Fighting randomness with randomness: Mitigating optimisation instability of fine-tuning using delayed ensemble and noisy interpolation

B Pecher, J Cegin, R Belanec, J Simko, I Srba… - arXiv preprint arXiv …, 2024 - arxiv.org
While fine-tuning of pre-trained language models generally helps to overcome the lack of
labelled training samples, it also displays model performance instability. This instability …

Parameter-efficient fine-tuning in large models: A survey of methodologies

L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
The large models, as predicted by scaling law forecasts, have made groundbreaking
progress in many fields, particularly in natural language generation tasks, where they have …

xLSTM-Mixer: Multivariate Time Series Forecasting by Mixing via Scalar Memories

M Kraus, F Divo, DS Dhami, K Kersting - arXiv preprint arXiv:2410.16928, 2024 - arxiv.org
Time series data is prevalent across numerous fields, necessitating the development of
robust and accurate forecasting models. Capturing patterns both within and between …

Efficient Knowledge Transfer and Adaptation for Speech and Beyond

U Cappellazzo - 2025 - iris.unitn.it
This thesis advances the field of efficient knowledge transfer and adaptation in the realm of
speech processing. It is structured to address the limitations of transfer learning in …

Does Example Selection for In-Context Learning Amplify the Biases of Large Language Models?

X Guo, J Gao, J Zhou, J Zhang, X Zhao, X Yao, X Wei - openreview.net
In-context learning (ICL) has proven to be adept at adapting large language models (LLMs)
to downstream tasks without parameter updates, based on a few demonstration examples …