Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
MAGIS: LLM-based multi-agent framework for GitHub issue resolution
In software development, resolving the emergent issues within GitHub repositories is a
complex challenge that involves not only the incorporation of new code but also the …
InstructZero: Efficient instruction optimization for black-box large language models
Large language models (LLMs) are instruction followers, but it can be challenging to find
the best instruction for different situations, especially for black-box LLMs on which …
A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning, or few-shot learning, aims to effectively train a model using only a small amount of …
Survey of different large language model architectures: Trends, benchmarks, and challenges
Large Language Models (LLMs) represent a class of deep learning models adept at
understanding natural language and generating coherent responses to various prompts or …
Fighting randomness with randomness: Mitigating optimisation instability of fine-tuning using delayed ensemble and noisy interpolation
While fine-tuning of pre-trained language models generally helps to overcome the lack of
labelled training samples, it also displays model performance instability. This instability …
Parameter-efficient fine-tuning in large models: A survey of methodologies
L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - arXiv preprint arXiv…, 2024 - arxiv.org
The large models, as predicted by scaling law forecasts, have made groundbreaking
progress in many fields, particularly in natural language generation tasks, where they have …
xLSTM-Mixer: Multivariate Time Series Forecasting by Mixing via Scalar Memories
Time series data is prevalent across numerous fields, necessitating the development of
robust and accurate forecasting models. Capturing patterns both within and between …
Efficient Knowledge Transfer and Adaptation for Speech and Beyond
U Cappellazzo - 2025 - iris.unitn.it
This thesis advances the field of efficient knowledge transfer and adaptation in the realm of
speech processing. It is structured to address the limitations of transfer learning in …
Does Example Selection for In-Context Learning Amplify the Biases of Large Language Models?
X Guo, J Gao, J Zhou, J Zhang, X Zhao, X Yao, X Wei - openreview.net
In-context learning (ICL) has proven to be adept at adapting large language models (LLMs)
to downstream tasks without parameter updates, based on a few demonstration examples …