Dora: Weight-decomposed low-rank adaptation
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because of avoiding additional inference …
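For context on what the title's weight decomposition refers to, a minimal PyTorch sketch follows; the class name DoRALinear, the rank, and the initialization are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Sketch of weight-decomposed low-rank adaptation on a linear layer.

    The frozen pretrained weight is split into a magnitude vector and a
    direction; only the magnitude and the low-rank factors A, B are trained.
    Illustrative only, not the authors' implementation.
    """

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_features, in_features = base.weight.shape
        # Frozen pretrained weight W0.
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        # Trainable magnitude, initialized to the row-wise norm of W0.
        self.magnitude = nn.Parameter(self.weight.norm(p=2, dim=1, keepdim=True))
        # Low-rank update B @ A; B starts at zero so training begins at W0.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Reuse the base layer's bias as-is.
        self.bias = base.bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Direction = normalized (W0 + B A); each row is rescaled by the magnitude.
        adapted = self.weight + self.lora_B @ self.lora_A
        direction = adapted / adapted.norm(p=2, dim=1, keepdim=True)
        return nn.functional.linear(x, self.magnitude * direction, self.bias)
```

Wrapping an existing nn.Linear this way keeps the pretrained weight frozen while training only the magnitude vector and the low-rank factors, which is what lets such adapters avoid extra inference cost once merged back into the weight.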
A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-learning, or few-shot learning, aims to effectively train a model using only a small amount of …
Infoprompt: Information-theoretic soft prompt tuning for natural language understanding
Soft prompt tuning achieves superior performances across a wide range of few-shot tasks. However, the performances of prompt tuning can be highly sensitive to the initialization of …
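As a rough illustration of the soft prompt tuning setup this snippet refers to (trainable vectors prepended to the input embeddings while the backbone stays frozen), the sketch below uses assumed names and a random initialization; it is not the InfoPrompt method itself.

```python
import torch
import torch.nn as nn

class SoftPromptEmbedding(nn.Module):
    """Illustrative soft prompt: trainable vectors prepended to token embeddings.

    Only the prompt vectors are updated; the backbone is kept frozen.
    Names and initialization are assumptions, not the InfoPrompt procedure.
    """

    def __init__(self, embed_dim: int = 768, prompt_length: int = 20):
        super().__init__()
        # Random init; the snippet above notes results are sensitive to this choice.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, dim) -> (batch, prompt_len + seq_len, dim)
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)
```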
Aprompt: Attention prompt tuning for efficient adaptation of pre-trained language models
With the continuous growth of large language models, the process of fine-tuning these models for new tasks has become increasingly parameter-intensive. Prompt tuning, a …
Prompting language-informed distribution for compositional zero-shot learning
The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is learned only from the seen …
Extending Whisper with prompt tuning to target-speaker ASR
Target-speaker automatic speech recognition (ASR) aims to transcribe the desired speech of a target speaker from multi-talker overlapped utterances. Most of the existing target …
Flora: Low-rank core space for n-dimension
Adapting pre-trained foundation models for various downstream tasks has been prevalent in artificial intelligence. Due to the vast number of tasks and high costs, adjusting all …
Promptintern: Saving inference costs by internalizing recurrent prompt during large language model fine-tuning
Recent advances in fine-tuning large language models (LLMs) have greatly enhanced their usage in domain-specific tasks. Despite the success, fine-tuning continues to rely on …
Dean: Deactivating the coupled neurons to mitigate fairness-privacy conflicts in large language models
Ensuring awareness of fairness and privacy in Large Language Models (LLMs) is critical. Interestingly, we discover a counter-intuitive trade-off phenomenon that enhancing an LLM's …
Non-intrusive adaptation: Input-centric parameter-efficient fine-tuning for versatile multimodal modeling
Large language models (LLMs) and vision language models (VLMs) demonstrate excellent performance on a wide range of tasks by scaling up parameter counts from O(10^9) to O …