A comprehensive overview of large language models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in
natural language processing tasks and beyond. This success of LLMs has led to a large …
Neural prompt search
The size of vision models has grown exponentially over the last few years, especially after
the emergence of Vision Transformer. This has motivated the development of parameter …
On the effectiveness of parameter-efficient fine-tuning
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range
of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always …
Deep model fusion: A survey
Deep model fusion/merging is an emerging technique that merges the parameters or
predictions of multiple deep learning models into a single one. It combines the abilities of …
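The simplest parameter-level form of such fusion is plain weight averaging across models that share an architecture. The PyTorch sketch below is a minimal illustration under that assumption; the two-layer network and the average_state_dicts helper are hypothetical stand-ins for illustration, not code from the survey.

import torch
import torch.nn as nn

def average_state_dicts(models):
    """Average the parameters of several models with identical architectures."""
    state_dicts = [m.state_dict() for m in models]
    merged = {}
    for key in state_dicts[0]:
        # Stack each parameter tensor across models and take the elementwise mean.
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Usage: fuse two independently trained copies of the same architecture.
model_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model_b = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
fused = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
fused.load_state_dict(average_state_dicts([model_a, model_b]))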
A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-learning, or few-shot learning, aims to effectively train a model using only a small amount of …
Exploring adapter-based transfer learning for recommender systems: Empirical studies and practical insights
Adapters, a plug-in neural network module with some tunable parameters, have emerged as
a parameter-efficient transfer learning technique for adapting pre-trained models to …
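As a rough illustration of the adapter idea described above, the PyTorch sketch below adds a small bottleneck module with a residual connection on top of a frozen layer; the Adapter class, its dimensions, and its placement are assumptions for illustration, not details from the paper.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: project down, apply a nonlinearity, project back up."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pre-trained representation intact.
        return x + self.up(self.act(self.down(x)))

# Only the adapter's parameters are trained; the backbone stays frozen.
backbone = nn.Linear(64, 64)  # stand-in for a pre-trained layer
for p in backbone.parameters():
    p.requires_grad = False

adapter = Adapter(hidden_dim=64)
x = torch.randn(8, 64)
out = adapter(backbone(x))  # adapted forward pass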
Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes
Pre-trained large language models (LLMs) need fine-tuning to improve their responsiveness
to natural language instructions. Federated learning offers a way to fine-tune LLMs using the …
Exploring the capabilities of LLMs for code change related tasks
Developers deal with code-change-related tasks daily, e.g., reviewing code. Pre-trained code
and code-change-oriented models have been adapted to help developers with such tasks …
Astraios: Parameter-efficient instruction tuning code large language models
The high cost of full-parameter fine-tuning (FFT) of Large Language Models (LLMs) has led
to a series of parameter-efficient fine-tuning (PEFT) methods. However, it remains unclear …
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning
Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine …