AMMUS: A survey of transformer-based pretrained models in natural language processing
KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv …, 2021 - arxiv.org
Transformer-based pretrained language models (T-PTLMs) have achieved great success in
almost every NLP task. The evolution of these models started with GPT and BERT. These …
AMMU: A survey of transformer-based biomedical pretrained language models
Transformer-based pretrained language models (PLMs) have started a new era in modern
natural language processing (NLP). These models combine the power of transformers …
NusaCrowd: Open source initiative for Indonesian NLP resources
We present NusaCrowd, a collaborative initiative to collect and unify existing resources for
Indonesian languages, including opening access to previously non-public resources …
ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning
Over the last few years, large language models (LLMs) have emerged as the most important
breakthroughs in natural language processing (NLP) that fundamentally transform research …
CodeXGLUE: A machine learning benchmark dataset for code understanding and generation
Benchmark datasets have a significant impact on accelerating research in programming
language tasks. In this paper, we introduce CodeXGLUE, a benchmark dataset to foster …
KLUE: Korean language understanding evaluation
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a
collection of 8 Korean natural language understanding (NLU) tasks, including Topic …
mGPT: Few-shot learners go multilingual
This paper introduces mGPT, a multilingual variant of GPT-3, pretrained on 61 languages
from 25 linguistically diverse language families using Wikipedia and the C4 Corpus. We …
InfoXLM: An information-theoretic framework for cross-lingual language model pre-training
In this work, we present an information-theoretic framework that formulates cross-lingual
language model pre-training as maximizing mutual information between multilingual-multi …
Multilingual large language model: A survey of resources, taxonomy and frontiers
Multilingual Large Language Models use powerful Large Language
Models to handle and respond to queries in multiple languages, which achieves remarkable …
MLQA: Evaluating cross-lingual extractive question answering
Question answering (QA) models have shown rapid progress enabled by the availability of
large, high-quality benchmark datasets. Such annotated datasets are difficult and costly to …