Do prompt-based models really understand the meaning of their prompts?

A Webson, E Pavlick - Proceedings of the 2022 Conference of the …, 2022 - aclanthology.org
Recently, a boom of papers has shown extraordinary progress in zero-shot and few-shot
learning with various prompt-based models. It is commonly argued that prompts help models …
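
A minimal sketch of the kind of manipulation this question invites: score the same task example under an instructive prompt template and an irrelevant one, then compare. The template wording and the commented-out scoring call are illustrative assumptions, not the paper's actual prompts or API.

    # Hypothetical templates; the paper's own prompts are not reproduced here.
    templates = {
        "instructive": "{premise} Question: does this mean that {hypothesis}? Yes or no?",
        "irrelevant": "{premise} Is the sky blue when {hypothesis}? Yes or no?",
    }
    example = {"premise": "A dog is running in the park.",
               "hypothesis": "An animal is outside."}
    for name, template in templates.items():
        prompt = template.format(**example)
        print(f"[{name}] {prompt}")
        # score = model.log_prob("yes", prompt)  # scoring model left abstract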

Word order does matter and shuffled language models know it

M Abdou, V Ravishankar, A Kulmizev… - Proceedings of the 60th …, 2022 - aclanthology.org
Recent studies have shown that language models pretrained and/or fine-tuned on randomly
permuted sentences exhibit competitive performance on GLUE, putting into question the …
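
A minimal sketch of the perturbation the snippet describes, assuming whitespace tokenization and a uniform within-sentence shuffle (the paper's exact preprocessing may differ):

    import random

    def shuffle_sentence(sentence: str, rng: random.Random) -> str:
        # Permute the tokens of one training sentence uniformly at random.
        tokens = sentence.split()
        rng.shuffle(tokens)
        return " ".join(tokens)

    rng = random.Random(0)
    corpus = ["the cat sat on the mat", "language models learn word order"]
    print([shuffle_sentence(s, rng) for s in corpus])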

The ambiguity of BERTology: what do large language models represent?

T Buder-Gröndahl - Synthese, 2023 - Springer
The field of “BERTology” aims to locate linguistic representations in large language models
(LLMs). These have commonly been interpreted as representing structural descriptions …

Explicitly representing syntax improves sentence-to-layout prediction of unexpected situations

W Nuyts, R Cartuyvels, MF Moens - Transactions of the Association …, 2024 - direct.mit.edu
Recognizing visual entities in a natural language sentence and arranging them in a 2D
spatial layout require a compositional understanding of language and space. This task of …

Measuring the knowledge acquisition-utilization gap in pretrained language models

A Kazemnejad, M Rezagholizadeh… - arXiv preprint arXiv …, 2023 - arxiv.org
While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts
of knowledge, it remains unclear how much of this parametric knowledge is actually usable …

Multilingual Nonce Dependency Treebanks: Understanding how Language Models Represent and Process Syntactic Structure

D Arps, L Kallmeyer, Y Samih, H Sajjad - arXiv preprint arXiv:2311.07497, 2023 - arxiv.org
We introduce SPUD (Semantically Perturbed Universal Dependencies), a framework for
creating nonce treebanks for the multilingual Universal Dependencies (UD) corpora. SPUD …
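
A hedged sketch of a nonce perturbation in this spirit: replace content words with pseudowords while preserving each token's POS tag and dependency head. The pseudoword list and the POS classes treated as content words are assumptions for illustration, not SPUD's actual procedure.

    CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}  # assumed content classes
    pseudowords = iter(["blicket", "dax", "wug", "fep"])  # invented forms

    # Toy UD-style tokens: (form, upos, head index; 0 = root)
    sentence = [("the", "DET", 2), ("dog", "NOUN", 3), ("barked", "VERB", 0)]

    nonce = [(next(pseudowords) if upos in CONTENT_POS else form, upos, head)
             for form, upos, head in sentence]
    print(nonce)  # structure kept, lexical content replaced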

Linguistic Structure Induction from Language Models

O Momen - arXiv preprint arXiv:2403.09714, 2024 - arxiv.org
Linear sequences of words are implicitly represented in our brains by hierarchical structures
that organize the composition of words in sentences. Linguists formalize different …

Seeing Syntax: Uncovering Syntactic Learning Limitations in Vision-Language Models

SH Dumpala, D Arps, S Oore, L Kallmeyer… - arXiv preprint arXiv …, 2024 - arxiv.org
Vision-language models (VLMs) serve as foundation models for multi-modal applications
such as image captioning and text-to-image generation. Recent studies have highlighted …

The emergence of grammatical structure from inter-predictability

J Mansfield, C Kemp - 2023 - osf.io
Recent research has shown that words or morphemes that are closer to each other in linear
order tend to have higher statistical inter-predictability, measured as mutual information. We …
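
As a rough illustration of the measure mentioned here, mutual information between adjacent words can be estimated from bigram and unigram counts; the tiny corpus below is purely illustrative.

    from collections import Counter
    from math import log2

    corpus = "the dog barked the dog slept the cat slept".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    n_bi, n_uni = sum(bigrams.values()), sum(unigrams.values())

    # Plug-in estimate of MI between adjacent word positions:
    # sum over bigrams of p(a,b) * log2(p(a,b) / (p(a) * p(b)))
    mi = sum((c / n_bi) * log2((c / n_bi) /
             ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
             for (a, b), c in bigrams.items())
    print(f"adjacent-word MI ≈ {mi:.3f} bits")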