A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in …
A review on language models as knowledge bases
B AlKhamissi, M Li, A Celikyilmaz, M Diab… - arXiv
… artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered …
BioGPT: generative pre-trained transformer for biomedical text generation and mining
Pre-trained language models have attracted increasing attention in the biomedical domain,
inspired by their great success in the general natural language domain. Among the two main …
Unified named entity recognition as word-word relation classification
So far, named entity recognition (NER) has involved three major types: flat, overlapped (a.k.a. nested), and discontinuous NER, which have mostly been studied …
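To make the distinction between the three NER settings in the entry above concrete, the following minimal Python sketch annotates an example sentence with flat, overlapped (nested), and discontinuous entity spans. The sentence, labels, and span offsets are invented for illustration and are not taken from the cited paper.

```python
# Illustrative (invented) example of the three NER settings discussed above.
# Entities are given as lists of (start, end) token spans plus a label.

tokens = ["Acute", "lung", "and", "kidney", "failure", "in", "New", "York", "City"]

# Flat NER: entity spans do not overlap.
flat = [
    {"label": "LOC", "spans": [(6, 9)]},               # "New York City"
]

# Overlapped (nested) NER: one entity is contained inside another.
nested = [
    {"label": "GPE", "spans": [(6, 9)]},               # "New York City"
    {"label": "GPE", "spans": [(6, 8)]},               # "New York" nested inside it
]

# Discontinuous NER: one entity is made of non-adjacent token spans.
discontinuous = [
    {"label": "DISEASE", "spans": [(0, 2), (4, 5)]},   # "Acute lung ... failure"
    {"label": "DISEASE", "spans": [(0, 1), (3, 5)]},   # "Acute ... kidney failure"
]

def decode(entity):
    """Render an entity's (possibly non-adjacent) spans as text."""
    return " ... ".join(" ".join(tokens[s:e]) for s, e in entity["spans"])

for name, entities in [("flat", flat), ("nested", nested), ("discontinuous", discontinuous)]:
    for ent in entities:
        print(f"{name:14s} {ent['label']:8s} {decode(ent)}")
```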
PTR: Prompt tuning with rules for text classification
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
Template-based named entity recognition using BART
There is recent interest in investigating few-shot NER, where the low-resource target
domain has different label sets compared with a resource-rich source domain. Existing …
KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction
Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …
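As a rough illustration of the template-insertion idea described in the KnowPrompt snippet above, the short Python sketch below wraps an input sentence and its entity pair in a cloze-style template and maps candidate label words back to relation labels. The template wording, label words, and the stub scoring function are invented for illustration; this is not KnowPrompt's actual prompt design or optimization scheme.

```python
# Generic sketch of prompt-tuning-style template insertion for relation extraction.
# The template, label words, and scoring stub below are illustrative assumptions only.

TEMPLATE = "{sentence} In this sentence, {head} is the [MASK] of {tail}."

# Map each relation label to a single "label word" that could fill the [MASK] slot.
LABEL_WORDS = {
    "founder": "founder",
    "employee": "employee",
    "no_relation": "none",
}

def build_prompt(sentence: str, head: str, tail: str) -> str:
    """Insert the text pieces (template) around the original input."""
    return TEMPLATE.format(sentence=sentence, head=head, tail=tail)

def score_label_word(prompt: str, word: str) -> float:
    """Stand-in for a masked language model's score of `word` at [MASK].
    A real implementation would query a PLM; here we fake it for the demo."""
    return float(word in prompt.lower())  # trivial placeholder heuristic

def predict_relation(sentence: str, head: str, tail: str) -> str:
    """Pick the relation whose label word best fits the masked slot."""
    prompt = build_prompt(sentence, head, tail)
    scores = {rel: score_label_word(prompt, w) for rel, w in LABEL_WORDS.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    sent = "Steve Jobs was the founder of Apple."
    print(build_prompt(sent, "Steve Jobs", "Apple"))
    print(predict_relation(sent, "Steve Jobs", "Apple"))
```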
REBEL: Relation extraction by end-to-end language generation
Extracting relation triplets from raw text is a crucial task in Information Extraction, enabling
multiple applications such as populating or validating knowledge bases, fact-checking, and …