Challenges and applications of large language models
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …
Prompting large language model for machine translation: A case study
Research on prompting has shown excellent performance with little or even no supervised
training across many tasks. However, prompting for machine translation is still under …
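To make the prompting setup named in this snippet concrete, here is a minimal sketch of assembling a few-shot translation prompt; the language pair, example sentence pairs, and template are illustrative assumptions, not the format used in the paper.

```python
# Minimal sketch (assumed format): build a German-to-English few-shot prompt.
few_shot_pairs = [
    ("Das Wetter ist heute schön.", "The weather is nice today."),
    ("Ich habe das Buch gelesen.", "I have read the book."),
]

def build_translation_prompt(source_sentence: str) -> str:
    """Concatenate in-context example pairs, then the sentence to translate."""
    blocks = [f"German: {src}\nEnglish: {tgt}" for src, tgt in few_shot_pairs]
    blocks.append(f"German: {source_sentence}\nEnglish:")
    return "\n\n".join(blocks)

print(build_translation_prompt("Wo ist der Bahnhof?"))
```

The prompt string would then be sent to the model, with the text generated after the final "English:" taken as the translation.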
Crosslingual generalization through multitask finetuning
Multitask prompted finetuning (MTF) has been shown to help large language models
generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused …
Prompting PaLM for translation: Assessing strategies and performance
Large language models (LLMs) that have been trained on multilingual but not parallel text
exhibit a remarkable ability to translate between languages. We probe this ability in an in …
Multilingual large language model: A survey of resources, taxonomy and frontiers
Multilingual Large Language Models use powerful Large Language Models to handle and
respond to queries in multiple languages, achieving remarkable …
Faithful logical reasoning via symbolic chain-of-thought
While the recent Chain-of-Thought (CoT) technique enhances the reasoning ability of large
language models (LLMs) with the theory of mind, it might still struggle in handling logical …
Nonparametric masked language modeling
Existing language models (LMs) predict tokens with a softmax over a finite vocabulary,
which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first …
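As a reference point for the limitation this snippet describes, here is a minimal sketch of the standard prediction step it contrasts against: a softmax over a fixed, finite vocabulary. The vocabulary and logits are toy values for illustration only, not code from the paper.

```python
import numpy as np

# Toy closed vocabulary and one logit per entry (illustrative values only).
vocab = ["the", "cat", "sat", "on", "mat", "<unk>"]
logits = np.array([2.1, 0.3, -1.0, 0.5, 1.2, -0.7])

# Standard parametric step: softmax over the finite vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Any word outside `vocab` (e.g. a rare name) can only surface as <unk>,
# which is the limitation a nonparametric approach targets.
print(dict(zip(vocab, probs.round(3))))
```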
Meet in the middle: A new pre-training paradigm
Most language models (LMs) are trained and applied in an autoregressive left-to-right
fashion, predicting the next token from the preceding ones. However, this ignores that the full …
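For context on the left-to-right setup this snippet contrasts against, here is a minimal sketch of autoregressive decoding, where each token is predicted from the preceding prefix only; the toy bigram table is an illustrative stand-in for a trained language model.

```python
# Toy bigram table as a stand-in for a trained language model.
toy_next_token = {
    "<bos>": "the",
    "the": "model",
    "model": "predicts",
    "predicts": "tokens",
    "tokens": "<eos>",
}

def generate(max_len: int = 10) -> list[str]:
    """Left-to-right decoding: each step conditions only on the prefix."""
    tokens = ["<bos>"]
    for _ in range(max_len):
        nxt = toy_next_token.get(tokens[-1], "<eos>")
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens[1:]  # drop <bos>

print(" ".join(generate()))
```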
Towards generating functionally correct code edits from natural language issue descriptions
Large language models (LLMs), such as OpenAI's Codex, have demonstrated their potential
to generate code from natural language descriptions across a wide range of programming …
This land is {Your, My} land: Evaluating geopolitical bias in language models through territorial disputes
Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained
large language model (LLM) may answer differently if asked in the languages of each …
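A minimal sketch of the kind of multilingual probe this snippet suggests: pose the same territorial question in the languages of the different claimants and compare the answers. The translations and the query_model stub are illustrative assumptions, not the paper's exact protocol.

```python
# Illustrative parallel prompts; translations and query_model are assumptions.
prompts = {
    "en": "Do the Spratly Islands belong to China, the Philippines, or Vietnam?",
    "zh": "南沙群岛属于中国、菲律宾还是越南？",
    "vi": "Quần đảo Trường Sa thuộc về Trung Quốc, Philippines hay Việt Nam?",
}

def query_model(prompt: str) -> str:
    """Placeholder for a call to a pretrained LLM."""
    return "<model answer>"

answers = {lang: query_model(p) for lang, p in prompts.items()}
print(answers)
print("Consistent across languages:", len(set(answers.values())) == 1)
```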