Pre-trained models for natural language processing: A survey

X Qiu, T Sun, Y Xu, Y Shao, N Dai, X Huang - Science China …, 2020 - Springer
Recently, the emergence of pre-trained models (PTMs) has brought natural language
processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs …

Machine knowledge: Creation and curation of comprehensive knowledge bases

G Weikum, XL Dong, S Razniewski… - … and Trends® in …, 2021 - nowpublishers.com
Equipping machines with comprehensive knowledge of the world's entities and their
relationships has been a longstanding goal of AI. Over the last decade, large-scale …

AutoPrompt: Eliciting knowledge from language models with automatically generated prompts

T Shin, Y Razeghi, RL Logan IV, E Wallace… - arXiv preprint arXiv …, 2020 - arxiv.org
The remarkable success of pretrained language models has motivated the study of what
kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the …

Pre-trained models: Past, present and future

X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu… - AI Open, 2021 - Elsevier
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …

A primer in BERTology: What we know about how BERT works

A Rogers, O Kovaleva, A Rumshisky - Transactions of the Association …, 2021 - direct.mit.edu
Transformer-based models have pushed the state of the art in many areas of NLP, but our
understanding of what is behind their success is still limited. This paper is the first survey of …

Exploiting cloze questions for few shot text classification and natural language inference

T Schick, H Schütze - arXiv preprint arXiv:2001.07676, 2020 - arxiv.org
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained
language model with" task descriptions" in natural language (eg, Radford et al., 2019). While …

How can we know what language models know?

Z Jiang, FF Xu, J Araki, G Neubig - Transactions of the Association for …, 2020 - direct.mit.edu
Recent work has presented intriguing results examining the knowledge contained in
language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a …

Learning how to ask: Querying LMs with mixtures of soft prompts

G Qin, J Eisner - arXiv preprint arXiv:2104.06599, 2021 - arxiv.org
Natural-language prompts have recently been used to coax pretrained language models
into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a …

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering

Z Jiang, J Araki, H Ding, G Neubig - Transactions of the Association …, 2021 - direct.mit.edu
Recent works have shown that language models (LMs) capture different types of knowledge
regarding facts or common sense. However, because no model is perfect, they still fail to …

Are large pre-trained language models leaking your personal information?

J Huang, H Shao, KCC Chang - arXiv preprint arXiv:2205.12628, 2022 - arxiv.org
Are Large Pre-Trained Language Models Leaking Your Personal Information? In this paper,
we analyze whether Pre-Trained Language Models (PLMs) are prone to leaking personal …