Dissociating language and thought in large language models
Large language models (LLMs) have come closest among all models to date to mastering
human language, yet opinions about their linguistic and cognitive capabilities remain split …
Syntactic structure from deep learning
Modern deep neural networks achieve impressive performance in engineering applications
that require extensive linguistic skills, such as machine translation. This success has …
GPT-4 passes the bar exam
In this paper, we experimentally evaluate the zero-shot performance of GPT-4 against prior
generations of GPT on the entire uniform bar examination (UBE), including not only the …
Modern language models refute Chomsky's approach to language
ST Piantadosi, From fieldwork to linguistic theory: A tribute to …, 2023
Modern machine learning has subverted and bypassed the theoretical framework of
Chomsky's generative approach to linguistics, including its core claims to particular insights …
What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
A Ettinger, Transactions of the Association for Computational …, 2020
Pre-training by language modeling has become a popular and successful approach to NLP
tasks, but we have yet to understand exactly what linguistic capacities these pre-training …
Linguistic Knowledge and Transferability of Contextual Representations
NF Liu, arXiv preprint arXiv:1903.08855, 2019
Contextual word representations derived from large-scale neural language models are
successful across a diverse set of NLP tasks, suggesting that they encode useful and …
Neural Network Acceptability Judgments
A Warstadt, arXiv preprint arXiv:1805.12471, 2019
This paper investigates the ability of artificial neural networks to judge the grammatical
acceptability of a sentence, with the goal of testing their linguistic competence. We introduce …
BLiMP: The benchmark of linguistic minimal pairs for English
We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP), a challenge set
for evaluating the linguistic knowledge of language models (LMs) on major grammatical …
What artificial neural networks can tell us about human language acquisition
Rapid progress in machine learning for natural language processing has the potential to
transform debates about how humans learn language. However, the learning environments …
Using computational models to test syntactic learnability
We studied the learnability of English filler-gap dependencies and the “island” constraints on
them by assessing the generalizations made by autoregressive (incremental) language …