Analysis methods in neural language processing: A survey

Y Belinkov, J Glass - Transactions of the Association for Computational Linguistics, 2019 - direct.mit.edu
The field of natural language processing has seen impressive progress in recent years, with
neural network models replacing many of the traditional systems. A plethora of new models …

Paradigm shift in natural language processing

TX Sun, XY Liu, XP Qiu, XJ Huang - Machine Intelligence Research, 2022 - Springer
In the era of deep learning, modeling for most natural language processing (NLP) tasks has
converged into several mainstream paradigms. For example, we usually adopt the …

Multitask prompted training enables zero-shot task generalization

V Sanh, A Webson, C Raffel, SH Bach… - arXiv preprint arXiv …, 2021 - arxiv.org
Large language models have recently been shown to attain reasonable zero-shot
generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that …

[PDF] BERT rediscovers the classical NLP pipeline

I Tenney - arXiv preprint arXiv:1905.05950, 2019 - fq.pkwyx.com
Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We
focus on one such model, BERT, and aim to quantify where linguistic information is captured …

SuperGLUE: A stickier benchmark for general-purpose language understanding systems

A Wang, Y Pruksachatkun, N Nangia… - Advances in Neural Information Processing Systems, 2019 - proceedings.neurips.cc
In the last year, new models and methods for pretraining and transfer learning have driven
striking performance improvements across a range of language understanding tasks. The …

BoolQ: Exploring the surprising difficulty of natural yes/no questions

C Clark, K Lee, MW Chang, T Kwiatkowski… - arXiv preprint arXiv …, 2019 - arxiv.org
In this paper we study yes/no questions that are naturally occurring, meaning that they are
generated in unprompted and unconstrained settings. We build a reading comprehension …

What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models

A Ettinger - Transactions of the Association for Computational Linguistics, 2020 - direct.mit.edu
Pre-training by language modeling has become a popular and successful approach to NLP
tasks, but we have yet to understand exactly what linguistic capacities these pre-training …

Inherent disagreements in human textual inferences

E Pavlick, T Kwiatkowski - Transactions of the Association for Computational Linguistics, 2019 - direct.mit.edu
We analyze humans' disagreements about the validity of natural language inferences. We
show that, very often, disagreements are not dismissible as annotation “noise”, but rather …

What will it take to fix benchmarking in natural language understanding?

SR Bowman, GE Dahl - arXiv preprint arXiv:2104.02145, 2021 - arxiv.org
Evaluation for many natural language understanding (NLU) tasks is broken: Unreliable and
biased systems score so highly on standard benchmarks that there is little room for …

[CITATION] Transformers: State-of-the-Art Natural Language Processing

T Wolf - arXiv preprint arXiv:1910.03771, 2020