Split computing and early exiting for deep learning applications: Survey and research challenges
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep
neural networks (DNNs) to execute complex inference tasks such as image classification …
Fine-tuning language models with just forward passes
Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but
as LMs grow in size, backpropagation requires a prohibitively large amount of memory …
Language models are super mario: Absorbing abilities from homologous models as a free lunch
In this paper, we unveil that Language Models (LMs) can acquire new capabilities by
assimilating parameters from homologous models without retraining or GPUs. We first …
Adapting language models to compress contexts
A Chevalier, A Wettig, A Ajith, D Chen - arXiv
downstream tasks. However, existing methods such as BERT model a single document, and …
Finetuned language models are zero-shot learners
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
True few-shot learning with language models
Pretrained language models (LMs) perform well on many tasks even when learning from a
few examples, but prior work uses many held-out examples to tune various aspects of …
Documenting large webtext corpora: A case study on the colossal clean crawled corpus
Large language models have led to remarkable progress on many NLP tasks, and
researchers are turning to ever-larger text corpora to train them. Some of the largest corpora …
Time travel in LLMs: Tracing data contamination in large language models
Data contamination, i.e., the presence of test data from downstream tasks in the training data
of large language models (LLMs), is a potential major issue in measuring LLMs' real …