Pre-trained language models and their applications

H Wang, J Li, H Wu, E Hovy, Y Sun - Engineering, 2023 - Elsevier
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …

A survey on knowledge graphs: Representation, acquisition, and applications

S Ji, S Pan, E Cambria, P Marttinen… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Human knowledge provides a formal understanding of the world. Knowledge graphs that
represent structural relations between entities have become an increasingly popular …
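As a rough illustration of the data structure this snippet refers to, here is a minimal sketch (the entities and relations are hypothetical, not taken from the survey) of how a knowledge graph stores structural relations between entities as triples:

```python
# Illustrative sketch: a knowledge graph as a set of (head, relation, tail) triples.
from typing import NamedTuple

class Triple(NamedTuple):
    head: str
    relation: str
    tail: str

# Hypothetical facts for illustration only.
kg = [
    Triple("Barack Obama", "born_in", "Honolulu"),
    Triple("Honolulu", "located_in", "Hawaii"),
]

def facts_about(entity: str) -> list[Triple]:
    """Return every triple in which the entity appears as head or tail."""
    return [t for t in kg if entity in (t.head, t.tail)]

print(facts_about("Honolulu"))
```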

Unifying large language models and knowledge graphs: A roadmap

S Pan, L Luo, Y Wang, C Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the
field of natural language processing and artificial intelligence, due to their emergent ability …

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …

Pre-trained models: Past, present and future

X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu… - AI Open, 2021 - Elsevier
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …

LUKE: Deep contextualized entity representations with entity-aware self-attention

I Yamada, A Asai, H Shindo, H Takeda… - arXiv preprint arXiv …, 2020 - arxiv.org
Entity representations are useful in natural language tasks involving entities. In this paper,
we propose new pretrained contextualized representations of words and entities based on …

KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction

X Chen, N Zhang, X Xie, S Deng, Y Yao, C Tan… - Proceedings of the …, 2022 - dl.acm.org
Recently, prompt-tuning has achieved promising results for specific few-shot classification
tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …
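As a rough illustration of the template idea this snippet describes, here is a minimal sketch of prompt-tuning-style relation extraction, not the KnowPrompt implementation; the model name, template wording, label words, and relation labels are all assumptions for the example:

```python
# Sketch: wrap the input in a template with a mask slot, then map the masked LM's
# predicted label word to a relation label via a verbalizer.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

sentence = "Steve Jobs co-founded Apple in 1976."
subj, obj = "Steve Jobs", "Apple"

# Template: text pieces inserted around the input, with a mask for the relation word.
prompt = f"{sentence} {subj} is the {tokenizer.mask_token} of {obj}."

# Verbalizer: label word -> relation label (both assumed here for illustration).
verbalizer = {"founder": "org:founded_by", "employee": "org:employee_of"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_index]  # vocabulary scores at the mask slot

# Score only the label words and map the best one back to its relation label.
label_ids = tokenizer.convert_tokens_to_ids(list(verbalizer))
best = logits[0, label_ids].argmax().item()
print(list(verbalizer.values())[best])
```

KnowPrompt itself goes further, learning its templates and answer words with knowledge-aware initialization rather than fixing them by hand, but the basic insert-a-template-and-fill-the-mask mechanics are the same.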

Domain-specific language model pretraining for biomedical natural language processing

Y Gu, R Tinn, H Cheng, M Lucas, N Usuyama… - ACM Transactions on …, 2021 - dl.acm.org
Pretraining large neural language models, such as BERT, has led to impressive gains on
many natural language processing (NLP) tasks. However, most pretraining efforts focus on …

A frustratingly easy approach for entity and relation extraction

Z Zhong, D Chen - arXiv preprint arXiv:2010.12812, 2020 - arxiv.org
End-to-end relation extraction aims to identify named entities and extract relations between
them. Most recent work models these two subtasks jointly, either by casting them in one …

How can we know what language models know?

Z Jiang, FF Xu, J Araki, G Neubig - Transactions of the Association for …, 2020 - direct.mit.edu
Recent work has presented intriguing results examining the knowledge contained in
language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a …
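As a rough illustration of the fill-in-the-blank probing this snippet describes, here is a minimal sketch using a masked LM through the Hugging Face fill-mask pipeline; the model choice is an assumption, and the paper's point is precisely that the wording of such prompts strongly affects what the LM appears to know:

```python
# Sketch: cloze-style probing of factual knowledge in a masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the LM to fill in the blank of a factual prompt and inspect its top guesses.
for candidate in fill_mask(f"Obama is a {fill_mask.tokenizer.mask_token} by profession."):
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```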