Principles of intensive human neuroimaging

ER Kupers, T Knapen, EP Merriam, KN Kay - Trends in Neurosciences, 2024 - cell.com
The rise of large, publicly shared functional magnetic resonance imaging (fMRI) data sets in
human neuroscience has focused on acquiring either a few hours of data on many …

Driving and suppressing the human language network using large language models

G Tuckute, A Sathe, S Srikant, M Taliaferro… - Nature Human …, 2024 - nature.com
Transformer models such as GPT generate human-like language and are predictive of
human brain responses to language. Here, using functional-MRI-measured brain responses …

Scaling laws for language encoding models in fMRI

R Antonello, A Vaidya, A Huth - Advances in Neural …, 2024 - proceedings.neurips.cc
Representations from transformer-based unidirectional language models are
known to be effective at predicting brain responses to natural language. However, most …

Computational language modeling and the promise of in silico experimentation

S Jain, VA Vo, L Wehbe, AG Huth - Neurobiology of Language, 2024 - direct.mit.edu
Language neuroscience currently relies on two major experimental paradigms:
controlled experiments using carefully hand-designed stimuli, and natural stimulus …

Explaining black box text modules in natural language with language models

C Singh, AR Hsu, R Antonello, S Jain, AG Huth… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have demonstrated remarkable prediction performance for a
growing array of tasks. However, their rapid proliferation and increasing opaqueness have …

Information-restricted neural language models reveal different brain regions' sensitivity to semantics, syntax, and context

A Pasquiou, Y Lakretz, B Thirion… - Neurobiology of …, 2023 - direct.mit.edu
A fundamental question in neurolinguistics concerns the brain regions involved in syntactic
and semantic processing during speech comprehension, both at the lexical (word …

Shared functional specialization in transformer-based language models and the human brain

S Kumar, TR Sumers, T Yamakoshi, A Goldstein… - Nature …, 2024 - nature.com
When processing language, the brain is thought to deploy specialized computations to
construct meaning from complex linguistic structures. Recently, artificial neural networks …

Language generation from human brain activities

Z Ye, Q Ai, Y Liu, M Zhang, C Lioma… - arXiv preprint arXiv …, 2023 - arxiv.org
Generating human language through non-invasive brain-computer interfaces (BCIs) has the
potential to unlock many applications, such as serving disabled patients and improving …

Vector-ICL: In-context Learning with Continuous Vector Representations

Y Zhuang, C Singh, L Liu, J Shang, J Gao - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have shown remarkable in-context learning (ICL)
capabilities on textual data. We explore whether these capabilities can be extended to …

Augmenting interpretable models with LLMs during training

C Singh, A Askari, R Caruana, J Gao - arXiv preprint arXiv:2209.11799, 2022 - arxiv.org
Recent large language models (LLMs) have demonstrated remarkable prediction
performance for a growing array of tasks. However, their proliferation into high-stakes …