Principles of intensive human neuroimaging
The rise of large, publicly shared functional magnetic resonance imaging (fMRI) data sets in
human neuroscience has focused on acquiring either a few hours of data on many …
Driving and suppressing the human language network using large language models
Transformer models such as GPT generate human-like language and are predictive of
human brain responses to language. Here, using functional-MRI-measured brain responses …
Scaling laws for language encoding models in fMRI
Abstract Representations from transformer-based unidirectional language models are
known to be effective at predicting brain responses to natural language. However, most …
Computational language modeling and the promise of in silico experimentation
Abstract Language neuroscience currently relies on two major experimental paradigms:
controlled experiments using carefully hand-designed stimuli, and natural stimulus …
Explaining black box text modules in natural language with language models
Large language models (LLMs) have demonstrated remarkable prediction performance for a
growing array of tasks. However, their rapid proliferation and increasing opaqueness have …
Information-restricted neural language models reveal different brain regions' sensitivity to semantics, syntax, and context
A fundamental question in neurolinguistics concerns the brain regions involved in syntactic
and semantic processing during speech comprehension, both at the lexical (word …
Shared functional specialization in transformer-based language models and the human brain
When processing language, the brain is thought to deploy specialized computations to
construct meaning from complex linguistic structures. Recently, artificial neural networks …
Language generation from human brain activities
Generating human language through non-invasive brain-computer interfaces (BCIs) has the
potential to unlock many applications, such as serving disabled patients and improving …
Vector-ICL: In-context Learning with Continuous Vector Representations
Large language models (LLMs) have shown remarkable in-context learning (ICL)
capabilities on textual data. We explore whether these capabilities can be extended to …
Augmenting interpretable models with LLMs during training
Recent large language models (LLMs) have demonstrated remarkable prediction
performance for a growing array of tasks. However, their proliferation into high-stakes …