Linguistic knowledge and transferability of contextual representations

NF Liu, M Gardner, Y Belinkov, ME Peters… - arXiv preprint arXiv …, 2019 - arxiv.org
Contextual word representations derived from large-scale neural language models are
successful across a diverse set of NLP tasks, suggesting that they encode useful and …

On the linguistic representational power of neural machine translation models

Y Belinkov, N Durrani, F Dalvi, H Sajjad… - Computational Linguistics, 2020 - direct.mit.edu
Despite the recent success of deep neural networks in natural language processing and
other spheres of artificial intelligence, their interpretability remains a challenge. We analyze …

LogiQA 2.0: an improved dataset for logical reasoning in natural language understanding

H Liu, J Liu, L Cui, Z Teng, N Duan… - IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023 - ieeexplore.ieee.org
NLP research on logical reasoning regains momentum with the recent releases of a handful
of datasets, notably LogiQA and ReClor. Logical reasoning is exploited in many probing …

An analysis of encoder representations in transformer-based machine translation

A Raganato, J Tiedemann - Proceedings of the 2018 EMNLP Workshop BlackboxNLP, 2018 - aclanthology.org
The attention mechanism is a successful technique in modern NLP, especially in tasks like
machine translation. The recently proposed network architecture of the Transformer is based …

Exploring and predicting transferability across NLP tasks

T Vu, T Wang, T Munkhdalai, A Sordoni… - arXiv preprint arXiv …, 2020 - arxiv.org
Recent advances in NLP demonstrate the effectiveness of training large-scale language
models and transferring them to downstream tasks. Can fine-tuning these models on tasks …

A survey on narrative extraction from textual data

B Santana, R Campos, E Amorim, A Jorge… - Artificial Intelligence Review, 2023 - Springer
Narratives are present in many forms of human expression and can be understood as a
fundamental way of communication between people. Computational understanding of the …

Compositionality in computational linguistics

L Donatelli, A Koller - Annual Review of Linguistics, 2023 - annualreviews.org
Neural models greatly outperform grammar-based models across many tasks in modern
computational linguistics. This raises the question of whether linguistic principles, such as …

Analyzing individual neurons in pre-trained language models

N Durrani, H Sajjad, F Dalvi, Y Belinkov - arXiv preprint arXiv:2010.02695, 2020 - arxiv.org
While a lot of analysis has been carried out to demonstrate the linguistic knowledge captured by
the representations learned within deep NLP models, very little attention has been paid towards …

Designing a uniform meaning representation for natural language processing

JEL Van Gysel, M Vigus, J Chun, K Lai, S Moeller… - KI-Künstliche Intelligenz, 2021 - Springer
In this paper we present Uniform Meaning Representation (UMR), a meaning representation
designed to annotate the semantic content of a text. UMR is primarily based on Abstract …

Discovering latent concepts learned in BERT

F Dalvi, AR Khan, F Alam, N Durrani, J Xu… - arXiv preprint arXiv …, 2022 - arxiv.org
A large number of studies that analyze deep neural network models and their ability to
encode various linguistic and non-linguistic concepts provide an interpretation of the inner …