A comprehensive overview of large language models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. This success of LLMs has led to a large …
AMMUS: A survey of transformer-based pretrained models in natural language processing
KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv …, 2021 - arxiv.org
Transformer-based pretrained language models (T-PTLMs) have achieved great success in almost every NLP task. The evolution of these models started with GPT and BERT. These …
NusaCrowd: Open source initiative for Indonesian NLP resources
We present NusaCrowd, a collaborative initiative to collect and unify existing resources for Indonesian languages, including opening access to previously non-public resources …
End-to-end transformer-based models in textual-based NLP
Transformer architectures are highly expressive because they use self-attention mechanisms to encode long-range dependencies in the input sequences. In this paper, we …
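The self-attention mechanism this snippet credits with encoding long-range dependencies fits in a few lines. Below is a minimal single-head sketch in NumPy; the shapes, weight matrices, and toy inputs are illustrative assumptions, not taken from the paper above.

```python
# Minimal sketch of scaled dot-product self-attention. Every token attends
# to every other token in one step, which is why dependencies between
# distant positions are easy to encode.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # (seq_len, seq_len) pairwise affinities
    weights = softmax(scores, axis=-1) # one attention distribution per token
    return weights @ V                 # weighted mix of value vectors

# Toy usage: 5 tokens, model width 8, head width 4 (arbitrary sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```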
SemEval-2022 Task 11: Multilingual complex named entity recognition (MultiCoNER)
We present the findings of SemEval-2022 Task 11 on Multilingual Complex Named Entity Recognition (MultiCoNER). Divided into 13 tracks, the task focused on methods to identify …
JGLUE: Japanese general language understanding evaluation
To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various …
Language models are few-shot multilingual learners
General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language …
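The few-shot setup referenced here and in several entries below is prompt-based: labeled demonstrations are concatenated ahead of the query and a frozen model continues the text, with no gradient updates. A minimal sketch follows; the sentiment task, demonstrations, and prompt format are made-up placeholders, not drawn from the paper.

```python
# Minimal sketch of few-shot in-context prompting: k labeled demonstrations
# are prepended to the query, and the LM's next-token continuation is read
# off as the prediction. Task and examples are hypothetical.
def build_few_shot_prompt(demos, query):
    lines = [f"Review: {text}\nSentiment: {label}\n" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A quiet, moving little film.")
print(prompt)
# The prompt is then sent to a frozen LM; its continuation
# ("positive" / "negative") serves as the classification.
```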
BLOOM+1: Adding language support to BLOOM for zero-shot prompting
The BLOOM model is a large publicly available multilingual language model, but its pretraining was limited to 46 languages. To extend the benefits of BLOOM to other …
What changes can large-scale language models bring? Intensive study on HyperCLOVA: billions-scale Korean generative pretrained transformers
GPT-3 shows remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds-of-billions-scale data. Here we address some remaining issues less …
On the effect of pretraining corpora on in-context learning by a large-scale language model
Many recent studies on large-scale language models have reported successful in-context zero- and few-shot learning ability. However, the in-depth analysis of when in-context …