Large language models with controllable working memory
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), owing to their excellent understanding and generation abilities …
Unsupervised commonsense question answering with self-talk
Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pre-trained language models as the …
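For context, the self-talk mechanism works roughly as follows: the model generates its own clarification question about the input, answers that question itself, and the self-generated answer is then used as background knowledge when answering the original question. A minimal Python sketch, assuming a Hugging Face causal LM; the model name and prompt templates are illustrative assumptions, not the paper's exact setup:

    from transformers import pipeline

    # Illustrative model choice; the paper evaluates several pretrained LMs.
    generator = pipeline("text-generation", model="gpt2")

    context = "The man poured water on the campfire before leaving."

    # 1. Elicit a clarification question with a fixed question prefix.
    q_out = generator(context + " What is the reason for this?",
                      max_new_tokens=20)[0]["generated_text"]

    # 2. Let the model answer its own question.
    a_out = generator(q_out + " The reason is",
                      max_new_tokens=20)[0]["generated_text"]

    # 3. Use the self-generated answer as extra background knowledge
    #    when scoring candidate answers to the original question.
    enriched_context = a_out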
A survey of knowledge enhanced pre-trained language models
Pre-trained language models learn informative word representations on a large-scale text corpus through self-supervised learning, which has achieved promising performance in …
Blade: Enhancing black-box large language models with small domain-specific models
Large Language Models (LLMs) like ChatGPT and GPT-4 are versatile and capable of addressing a diverse range of tasks. However, general LLMs, which are developed on open …
A comparative analysis of knowledge injection strategies for large language models in the scholarly domain
A Cadeddu, A Chessa, V De Leo, G Fenu… - … Applications of Artificial …, 2024 - Elsevier
In recent years, transformer-based models have emerged as powerful tools for natural language processing tasks, demonstrating remarkable performance in several domains …
Self-supervised knowledge triplet learning for zero-shot question answering
The aim of all Question Answering (QA) systems is to be able to generalize to unseen questions. Current supervised methods are reliant on expensive data annotation. Moreover …
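The title suggests the common self-supervised formulation in which one element of a (head, relation, tail) knowledge triplet is masked and predicted from the remaining two, so no QA annotation is required. A minimal sketch of that objective; the triplet, its verbalization, and the mask token are illustrative assumptions, not necessarily the paper's choices:

    # Turn a knowledge triplet into a self-supervised fill-in example by
    # masking one of its three elements (here, the tail entity).
    def triplet_to_example(head, relation, tail, mask="[MASK]"):
        masked = f"{head} {relation} {mask}."
        return masked, tail  # (model input, recovery target)

    # The model must recover "kitchen" from the other two elements;
    # masking the head or the relation works the same way.
    x, y = triplet_to_example("stove", "is located at", "kitchen")
    # x == "stove is located at [MASK]."   y == "kitchen"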
BERT-kNN: Adding a kNN search component to pretrained language models for better QA
Khandelwal et al. (2020) use a k-nearest-neighbor (kNN) component to improve language model performance. We show that this idea is beneficial for open-domain question …
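The kNN idea of Khandelwal et al. (2020) is to interpolate the LM's predicted distribution with a distribution built from the labels of the query's nearest neighbors in a datastore of (representation, token) pairs. A minimal numpy sketch of that interpolation, not the paper's exact method; the distance-to-weight mapping and the mixing weight lam are illustrative assumptions:

    import numpy as np

    def knn_lm_probs(query_vec, p_lm, datastore_keys, datastore_labels,
                     vocab_size, k=8, lam=0.5):
        # Distances from the query representation to every datastore key.
        dists = np.linalg.norm(datastore_keys - query_vec, axis=1)
        nearest = np.argsort(dists)[:k]

        # Convert neighbor distances into a distribution over their labels.
        weights = np.exp(-dists[nearest])
        weights /= weights.sum()
        p_knn = np.zeros(vocab_size)
        for w, idx in zip(weights, nearest):
            p_knn[datastore_labels[idx]] += w

        # Interpolate the kNN distribution with the LM's own distribution.
        return lam * p_knn + (1.0 - lam) * p_lm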
Kalm: Knowledge-aware integration of local, document, and global contexts for long document understanding
With the advent of pretrained language models (LMs), increasing research efforts have been focusing on infusing commonsense and domain-specific knowledge to prepare LMs for …
A novel self-attention enriching mechanism for biomedical question answering
The task of biomedical question answering is a subtask of the more general question answering task, concerned only with biomedical questions. The current state-of-the …
Distilling hypernymy relations from language models: On the effectiveness of zero-shot taxonomy induction
In this paper, we analyze zero-shot taxonomy learning methods which are based on distilling knowledge from language models via prompting and sentence scoring. We show …
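The sentence-scoring step can be made concrete: each candidate (child, parent) pair is verbalized with a Hearst-style template and scored by the LM's likelihood of the resulting sentence, keeping the highest-scoring parents. A minimal sketch, assuming a causal LM and a single template; both are illustrative choices rather than the paper's exact configuration:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")

    def hypernymy_score(child, parent):
        # Verbalize the candidate pair with a Hearst-style template.
        ids = tok(f"A {child} is a type of {parent}.",
                  return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels makes the model return mean token cross-entropy.
            loss = lm(ids, labels=ids).loss
        return -loss.item()  # higher = more plausible sentence

    candidates = ["animal", "vehicle", "fruit"]
    best = max(candidates, key=lambda p: hypernymy_score("cat", p))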