ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning
Over the last few years, large language models (LLMs) have emerged as the most important
breakthroughs in natural language processing (NLP) that fundamentally transform research …
Universal dependencies
Universal dependencies (UD) is a framework for morphosyntactic annotation of human
language, which to date has been used to create treebanks for more than 100 languages. In …
COMET: A neural framework for MT evaluation
We present COMET, a neural framework for training multilingual machine translation
evaluation models which obtains new state-of-the-art levels of correlation with human …
A primer in BERTology: What we know about how BERT works
Transformer-based models have pushed the state of the art in many areas of NLP, but our
understanding of what is behind their success is still limited. This paper is the first survey of …
CamemBERT: a tasty French language model
Pretrained language models are now ubiquitous in Natural Language Processing. Despite
their success, most available models have either been trained on English data or on the …
From zero to hero: On the limitations of zero-shot cross-lingual transfer with multilingual transformers
Massively multilingual transformers pretrained with language modeling objectives (e.g.,
mBERT, XLM-R) have become a de facto default transfer paradigm for zero-shot cross …
IndoLEM and IndoBERT: A benchmark dataset and pre-trained language model for Indonesian NLP
Although the Indonesian language is spoken by almost 200 million people and is the 10th
most spoken language in the world, it is under-represented in NLP research. Previous work …
Multilingual is not enough: BERT for Finnish
Deep learning-based language models pretrained on large unannotated text corpora have
been demonstrated to allow efficient transfer learning for natural language processing, with …
Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback
A key technology for the development of large language models (LLMs) involves instruction
tuning that helps align the models' responses with human expectations to realize impressive …
Systematic inequalities in language technology performance across the world's languages
Natural language processing (NLP) systems have become a central technology in
communication, education, medicine, artificial intelligence, and many other domains of …