Pre-trained language models and their applications
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …
A survey on knowledge graphs: Representation, acquisition, and applications
Human knowledge provides a formal understanding of the world. Knowledge graphs that
represent structural relations between entities have become an increasingly popular …
Unifying large language models and knowledge graphs: A roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the
field of natural language processing and artificial intelligence, due to their emergent ability …
PTR: Prompt tuning with rules for text classification
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
Pre-trained models: Past, present and future
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …
LUKE: Deep contextualized entity representations with entity-aware self-attention
Entity representations are useful in natural language tasks involving entities. In this paper,
we propose new pretrained contextualized representations of words and entities based on …
KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction
Recently, prompt-tuning has achieved promising results for specific few-shot classification
tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …
Domain-specific language model pretraining for biomedical natural language processing
Pretraining large neural language models, such as BERT, has led to impressive gains on
many natural language processing (NLP) tasks. However, most pretraining efforts focus on …
A frustratingly easy approach for entity and relation extraction
End-to-end relation extraction aims to identify named entities and extract relations between
them. Most recent work models these two subtasks jointly, either by casting them in one …
How can we know what language models know?
Recent work has presented intriguing results examining the knowledge contained in
language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a …