Robust natural language processing: Recent advances, challenges, and future directions

M Omar, S Choi, DH Nyang, D Mohaisen - IEEE Access, 2022
Recent natural language processing (NLP) techniques have accomplished high
performance on benchmark data sets, primarily due to the significant improvement in the …

Recent advances of foundation language models-based continual learning: A survey

Y Yang, J Zhou, X Ding, T Huai, S Liu, Q Chen, et al. - ACM Computing Surveys, 2025
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing and computer vision. Unlike traditional neural …

Calibrate before use: Improving few-shot performance of language models

Z Zhao, E Wallace, S Feng, D Klein, et al. - International Conference on Machine Learning, 2021
GPT-3 can perform numerous tasks when provided a natural language prompt that contains
a few training examples. We show that this type of few-shot learning can be unstable: the …
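
To make the instability concrete, the fix this paper proposes (contextual calibration) can be sketched in a few lines: estimate the bias a prompt induces by querying it with a content-free input such as "N/A", then rescale the label probabilities to undo that bias. This is a minimal sketch; the probabilities, label set, and function names are illustrative placeholders, not the authors' code.

```python
# Minimal sketch of contextual calibration, assuming we already have label
# probabilities from a prompted LM. All numbers and names are illustrative.
import numpy as np

def calibrate(label_probs, content_free_probs):
    """Correct label probabilities using those assigned to a content-free
    input (e.g. "N/A"), which estimate the prompt's inherent bias."""
    W = np.diag(1.0 / content_free_probs)   # diagonal correction matrix
    corrected = W @ label_probs             # rescale each label probability
    return corrected / corrected.sum()      # renormalize to a distribution

# Hypothetical example: the prompt is biased toward the first label.
p_cf = np.array([0.7, 0.3])     # P(label | "N/A") under the few-shot prompt
p_test = np.array([0.55, 0.45]) # P(label | actual test input)
print(calibrate(p_test, p_cf))  # after correction, the second label wins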

Learning transferable visual models from natural language supervision

A Radford, JW Kim, C Hallacy, et al. - International Conference on Machine Learning, 2021
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories. This restricted form of supervision limits their generality and usability since …

A primer in BERTology: What we know about how BERT works

A Rogers, O Kovaleva, A Rumshisky - Transactions of the Association for Computational Linguistics, 2021
Transformer-based models have pushed the state of the art in many areas of NLP, but our
understanding of what is behind their success is still limited. This paper is the first survey of …

Language models are unsupervised multitask learners

A Radford, J Wu, R Child, D Luan, et al. - OpenAI Blog, 2019
Natural language processing tasks, such as question answering, machine translation,
reading comprehension, and summarization, are typically approached with supervised …

oLMpics: On what language model pre-training captures

A Talmor, Y Elazar, Y Goldberg, et al. - Transactions of the Association for Computational Linguistics, 2020
Recent success of pre-trained language models (LMs) has spurred widespread interest in
the language capabilities that they possess. However, efforts to understand whether LM …

Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts

A Kolides, A Nawaz, A Rathor, D Beeman, et al. - Simulation Modelling Practice and Theory, 2023
With the emergence of foundation models (FMs) that are trained on large amounts of data at
scale and adaptable to a wide range of downstream applications, AI is experiencing a …

Selective question answering under domain shift

A Kamath, R Jia, P Liang - arXiv preprint arXiv:2006.09462, 2020
To avoid giving wrong answers, question answering (QA) models need to know when to
abstain from answering. Moreover, users often ask questions that diverge from the model's …
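
The abstention idea can be illustrated with a simple confidence threshold: answer only when the model's score clears the threshold, otherwise return nothing. The paper goes further and trains a separate calibrator, particularly for questions outside the training distribution; the toy model, interface, and threshold below are assumptions for illustration only.

```python
# Minimal sketch of selective question answering: answer only when the
# model's confidence clears a threshold, otherwise abstain.
def selective_answer(question, qa_model, threshold=0.8):
    answer, confidence = qa_model(question)  # hypothetical (answer, prob) interface
    if confidence >= threshold:
        return answer
    return None                              # abstain rather than risk a wrong answer

# Toy model that is only confident on in-domain questions (hypothetical).
def toy_qa(question):
    return ("Paris", 0.95) if "capital of France" in question else ("unknown", 0.3)

print(selective_answer("What is the capital of France?", toy_qa))      # -> Paris
print(selective_answer("Who wrote the oldest known recipe?", toy_qa))  # -> None
```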

The effect of natural distribution shift on question answering models

J Miller, K Krauth, B Recht, et al. - International Conference on Machine Learning, 2020
We build four new test sets for the Stanford Question Answering Dataset (SQuAD) and
evaluate the ability of question-answering systems to generalize to new data. Our first test …
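
The evaluation setup, scoring the same QA system on the original SQuAD data and on freshly collected test sets and comparing the two numbers, might look roughly like the sketch below. Exact match is a standard SQuAD metric, but the toy model and data here are placeholders, not the authors' harness.

```python
# Sketch of comparing QA accuracy in- and out-of-distribution.
def exact_match(prediction, gold):
    return int(prediction.strip().lower() == gold.strip().lower())

def evaluate(model, dataset):
    return sum(exact_match(model(q), a) for q, a in dataset) / len(dataset)

model = lambda q: "1969"                                    # trivial toy model
original = [("When did Apollo 11 land?", "1969")]           # original test data (toy)
shifted  = [("When did the moon landing occur?", "1969"),
            ("Who was first on the moon?", "Neil Armstrong")]  # new test set (toy)
print(evaluate(model, original), evaluate(model, shifted))  # accuracy drops under shift
```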