Datasets for large language models: A comprehensive survey
This paper embarks on an exploration into the Large Language Model (LLM) datasets,
which play a crucial role in the remarkable advancements of LLMs. The datasets serve as …
Using natural language processing to support peer‐feedback in the age of artificial intelligence: A cross‐disciplinary framework and a research agenda
Advancements in artificial intelligence are rapidly increasing. The new‐generation large
language models, such as ChatGPT and GPT‐4, bear the potential to transform educational …
Efficient streaming language models with attention sinks
Deploying Large Language Models (LLMs) in streaming applications such as multi-round
dialogue, where long interactions are expected, is urgently needed but poses two major …
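A minimal sketch of the attention-sink idea this abstract refers to, under the assumption that the method keeps the first few tokens ("sinks") plus a sliding window of recent tokens in the KV cache; the function name and parameter values below are illustrative, not the paper's API:

```python
# Illustrative attention-sink cache policy: always retain the first
# `n_sink` tokens plus a sliding window of the most recent tokens,
# evicting everything in between. Names and sizes are hypothetical.

def evict_kv_cache(cache_len: int, n_sink: int = 4, window: int = 1020):
    """Return the token positions kept in the KV cache."""
    if cache_len <= n_sink + window:
        return list(range(cache_len))            # nothing to evict yet
    sinks = list(range(n_sink))                  # initial "sink" tokens
    recent = list(range(cache_len - window, cache_len))  # sliding window
    return sinks + recent

print(evict_kv_cache(8))          # [0, ..., 7] — cache still fits
print(evict_kv_cache(2048)[:6])   # [0, 1, 2, 3, 1028, 1029]
```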
GQA: Training generalized multi-query transformer models from multi-head checkpoints
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up
decoder inference. However, MQA can lead to quality degradation, and moreover it may not …
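A rough sketch of the checkpoint-conversion idea behind GQA, assuming the grouped key/value heads are built by mean-pooling the original heads within each group; shapes and variable names are invented for illustration:

```python
# Illustrative conversion of per-head K projections into grouped-query
# form by mean-pooling KV heads within each group. With n_groups=1 this
# degenerates to MQA (a single shared key-value head); with
# n_groups=n_heads it is plain multi-head attention.
import numpy as np

n_heads, n_groups, head_dim, d_model = 8, 2, 64, 512
W_k = np.random.randn(n_heads, d_model, head_dim)  # one K projection per head

W_k_gqa = (W_k.reshape(n_groups, n_heads // n_groups, d_model, head_dim)
               .mean(axis=1))
print(W_k_gqa.shape)  # (2, 512, 64) — queries keep 8 heads, K/V keep 2
```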
UL2: Unifying language learning paradigms
Existing pre-trained models are generally geared towards a particular class of problems. To
date, there seems to be still no consensus on what the right architecture and pre-training …
LongBench: A bilingual, multitask benchmark for long context understanding
Although large language models (LLMs) demonstrate impressive performance for many
language tasks, most of them can only handle texts a few thousand tokens long, limiting their …
Finetuned language models are zero-shot learners
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
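A small illustration of what the instruction-tuning recipe in this abstract amounts to in practice: rendering ordinary labeled examples as natural-language instructions before fine-tuning. The templates below are made up for illustration; the paper uses many templates per task:

```python
# Hypothetical instruction templates for a sentiment task; instruction
# tuning fine-tunes the model on (input, target) pairs rendered this way.

templates = [
    "Is the sentiment of this review positive or negative?\n{text}",
    "Review: {text}\nDid the reviewer like it? Answer positive or negative.",
]

def to_instruction(example: dict, template: str) -> dict:
    return {"input": template.format(text=example["text"]),
            "target": example["label"]}

ex = {"text": "The film was a delight.", "label": "positive"}
print(to_instruction(ex, templates[0])["input"])
```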
LongT5: Efficient text-to-text transformer for long sequences
Recent work has shown that either (1) increasing the input length or (2) increasing model
size can improve the performance of Transformer-based neural models. In this paper, we …
Graph neural networks for natural language processing: A survey
Deep learning has become the dominant approach in addressing various tasks in Natural
Language Processing (NLP). Although text inputs are typically represented as a sequence …
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
Despite the success, the process of fine-tuning large-scale PLMs brings prohibitive
adaptation costs. In fact, fine-tuning all the parameters of a colossal model and retaining …
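A sketch of one delta-tuning family the survey covers: freeze the pretrained weight W and train only a small low-rank update BA on top of it (a LoRA-style delta). All sizes and names here are illustrative, not the survey's code:

```python
# Freeze the pretrained weight; learn only a rank-8 delta. The delta
# starts at zero (B = 0), so training begins from the pretrained model.
import torch
import torch.nn as nn

class LowRankDelta(nn.Module):
    def __init__(self, d_in=512, d_out=512, rank=8):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.W.weight.requires_grad_(False)              # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # delta starts at zero

    def forward(self, x):
        return self.W(x) + x @ self.A.T @ self.B.T       # W x + B A x

layer = LowRankDelta()
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable {trainable} / total {total}")          # ~3% of the parameters
```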