Probing classifiers: Promises, shortcomings, and advances
Y Belinkov - Computational Linguistics, 2022 - direct.mit.edu
Probing classifiers have emerged as one of the prominent methodologies for interpreting
and analyzing deep neural network models of natural language processing. The basic idea …
Analysis methods in neural language processing: A survey
The field of natural language processing has seen impressive progress in recent years, with
neural network models replacing many of the traditional systems. A plethora of new models …
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Bertology meets biology: Interpreting attention in protein language models
Transformer architectures have proven to learn useful representations for protein
classification and generation tasks. However, these representations present challenges in …
Compositionality decomposed: How do neural networks generalise?
Despite a multitude of empirical studies, little consensus exists on whether neural networks
are able to generalise compositionally, a controversy that, in part, stems from a lack of …
Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure
We investigate how neural networks can learn and process languages with hierarchical,
compositional semantics. To this end, we define the artificial task of processing nested …
Towards faithful model explanation in NLP: A survey
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …
Semantic structure in deep learning
E Pavlick - Annual Review of Linguistics, 2022 - annualreviews.org
Deep learning has recently come to dominate computational linguistics, leading to claims of
human-level performance in a range of language processing tasks. Like much previous …
Making transformers solve compositional tasks
Several studies have reported the inability of Transformer models to generalize
compositionally, a key type of generalization in many NLP tasks such as semantic parsing …