Language in brains, minds, and machines

G Tuckute, N Kanwisher… - Annual Review of …, 2024 - annualreviews.org
It has long been argued that only humans could produce and understand language. But
now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the …

The limitations of large language models for understanding human language and cognition

C Cuskley, R Woods, M Flaherty - Open Mind, 2024 - direct.mit.edu
Researchers have recently argued that the capabilities of Large Language Models (LLMs)
can provide new insights into longstanding debates about the role of learning and/or …

Brain-like language processing via a shallow untrained multihead attention network

B AlKhamissi, G Tuckute, A Bosselut… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have been shown to be effective models of the human
language system, with some models predicting most of the explainable variance of brain activity in …

Is it the end of (generative) linguistics as we know it?

C Chesi - arXiv preprint arXiv:2412.12797, 2024 - arxiv.org
A significant debate has emerged in response to a paper written by Steven Piantadosi
(Piantadosi, 2023) and uploaded to the LingBuzz platform, the open archive for generative …

A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles

EKR Lee, S Nair, N Feldman - arXiv preprint arXiv:2410.16139, 2024 - arxiv.org
We present a systematic evaluation of large language models' sensitivity to argument roles,
i.e., who did what to whom, by replicating psycholinguistic studies on human argument role …

Reverse-Engineering the Reader

S Kiegeland, EG Wilcox, A Amini, DR Reich… - arXiv preprint arXiv …, 2024 - arxiv.org
Numerous previous studies have sought to determine to what extent language models,
pretrained on natural language text, can serve as useful models of human cognition. In this …

Recurrent Networks are (Linguistically) Better? An Experiment on Small-LM Training on Child-Directed Speech in Italian

A Fusco, M Barbini, MLP Bianchessi… - CEUR WORKSHOP …, 2024 - ceur-ws.org
Here we discuss strategies and results of a small-sized training program based on Italian
child-directed speech (less than 3M tokens) for various network architectures. The rationale …

Geohumanities 2.0

M Böhlen - On the Logics of Planetary Computing - taylorfrancis.com
The last chapter discusses why smaller AI systems are becoming increasingly important.
I look at existing foundation models in AI and discuss the unease many researchers …