Pre-trained models for natural language processing: A survey
Recently, the emergence of pre-trained models (PTMs) has brought natural language
processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs …
Machine knowledge: Creation and curation of comprehensive knowledge bases
Equipping machines with comprehensive knowledge of the world's entities and their
relationships has been a longstanding goal of AI. Over the last decade, large-scale …
Autoprompt: Eliciting knowledge from language models with automatically generated prompts
The remarkable success of pretrained language models has motivated the study of what
kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the …
Pre-trained models: Past, present and future
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …
A primer in BERTology: What we know about how BERT works
Transformer-based models have pushed the state of the art in many areas of NLP, but our
understanding of what is behind their success is still limited. This paper is the first survey of …
Exploiting cloze questions for few shot text classification and natural language inference
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained
language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While …
How can we know what language models know?
Recent work has presented intriguing results examining the knowledge contained in
language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a …
Learning how to ask: Querying LMs with mixtures of soft prompts
Natural-language prompts have recently been used to coax pretrained language models
into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a …
How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
Recent works have shown that language models (LMs) capture different types of knowledge
regarding facts or common sense. However, because no model is perfect, they still fail to …
Are large pre-trained language models leaking your personal information?
In this paper, we analyze whether Pre-Trained Language Models (PLMs) are prone to leaking personal …