Give us the facts: Enhancing large language models with knowledge graphs for fact-aware language modeling
Recently, ChatGPT, a representative large language model (LLM), has gained considerable
attention. Due to their powerful emergent abilities, recent LLMs are considered a possible …
Pre-trained language models and their applications
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …
A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in …
Recent advances in natural language processing via large pre-trained language models: A survey
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …
RAPTOR: Recursive abstractive processing for tree-organized retrieval
Retrieval-augmented language models can better adapt to changes in world state and
incorporate long-tail knowledge. However, most existing methods retrieve only short …
Learning how to ask: Querying LMs with mixtures of soft prompts
Natural-language prompts have recently been used to coax pretrained language models
into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a …
It's not just size that matters: Small language models are also few-shot learners
When scaled to hundreds of billions of parameters, pretrained language models such as
GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous …
Large pre-trained language models contain human-like biases of what is right and wrong to do
Artificial writing is permeating our lives due to recent advances in large-scale, transformer-
based language models (LMs) such as BERT, GPT-2 and GPT-3. Using them as pre-trained …
Leveraging passage retrieval with generative models for open domain question answering
Generative models for open domain question answering have proven to be competitive,
without resorting to external knowledge. While promising, this approach requires the use of …
What disease does this patient have? A large-scale open domain question answering dataset from medical exams
Open domain question answering (OpenQA) tasks have recently been attracting more and more attention from the natural language processing (NLP) community. In this work, we …