Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times?
BD Oh, W Schuler - Transactions of the Association for Computational …, 2023 - direct.mit.edu
This work presents a linguistic analysis into why larger Transformer-based pre-trained
language models with more parameters and lower perplexity nonetheless yield surprisal …
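For context, the perplexity referred to here is the standard held-out measure (a textbook definition, not a detail taken from this entry): the exponentiated average negative log probability the model assigns to each token,

\[
\mathrm{PPL} = \exp\!\Big(-\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i})\Big)
\]

so a lower value means the model assigns higher probability to the observed text.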
Large-scale evidence for logarithmic effects of word predictability on reading time
During real-time language comprehension, our minds rapidly decode complex meanings
from sequences of words. The difficulty of doing so is known to be related to words' …
fMRI reveals language-specific predictive coding during naturalistic sentence comprehension
Much research in cognitive neuroscience supports prediction as a canonical computation of
cognition across domains. Is such predictive coding implemented by feedback from higher …
Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex
To understand language, we must infer structured meanings from real-time auditory or visual
signals. Researchers have long focused on word-by-word structure building in working …
Comparison of structural parsers and neural language models as surprisal estimators
Expectation-based theories of sentence processing posit that processing difficulty is
determined by predictability in context. While predictability quantified via surprisal has …
Language model quality correlates with psychometric predictive power in multiple languages
Surprisal theory (Hale, 2001; Levy, 2008) posits that a word's reading time is proportional to
its surprisal (i.e., to its negative log probability given the preceding context). Since we are …
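To make the definition in this snippet concrete: the surprisal of word w_t given its preceding context is

\[
s(w_t) = -\log p(w_t \mid w_1, \ldots, w_{t-1})
\]

and the proportionality claim is typically operationalized as a linear regression of reading time on this quantity, \( \mathrm{RT}(w_t) = \alpha + \beta\, s(w_t) + \varepsilon \) (the linear linking form is the standard convention in this literature, not something stated in the snippet itself).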
The plausibility of sampling as an algorithmic theory of sentence processing
Words that are more surprising given context take longer to process. However, no
incremental parsing algorithm has been shown to directly predict this phenomenon. In this …
Incremental language comprehension difficulty predicts activity in the language network but not the multiple demand network
What role do domain-general executive functions play in human language comprehension?
To address this question, we examine the relationship between behavioral measures of …
How reliable are standard reading time analyses? Hierarchical bootstrap reveals substantial power over-optimism and scale-dependent Type I error inflation
We investigate the statistical power and Type I error rate of the two most common
approaches to reading time (RT) analyses: assuming normality of residuals and …
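To make the "hierarchical bootstrap" of this title concrete, below is a minimal two-level resampling sketch in Python: participants are resampled with replacement, then trials within each resampled participant. This is not the authors' code; the column names, the 0/1 condition coding, and the mean-difference statistic are all illustrative assumptions.

import numpy as np
import pandas as pd

def hierarchical_bootstrap(df, n_boot=2000, seed=None):
    """Two-level bootstrap of a condition effect on mean reading time.

    Assumes columns 'participant', 'condition' (coded 0/1), and 'rt'.
    Level 1 resamples participants with replacement; level 2 resamples
    trials within each resampled participant.
    """
    rng = np.random.default_rng(seed)
    groups = {p: g for p, g in df.groupby("participant")}
    ids = np.array(list(groups))
    effects = np.empty(n_boot)
    for b in range(n_boot):
        # Level 1: resample participant IDs with replacement.
        sampled = rng.choice(ids, size=len(ids), replace=True)
        # Level 2: resample each chosen participant's trials with replacement.
        boot = pd.concat(
            [groups[p].sample(frac=1.0, replace=True, random_state=rng)
             for p in sampled],
            ignore_index=True,
        )
        means = boot.groupby("condition")["rt"].mean()
        effects[b] = means.loc[1] - means.loc[0]
    return effects

# Usage (hypothetical data frame rt_df with the assumed columns):
# effects = hierarchical_bootstrap(rt_df, seed=0)
# ci = np.percentile(effects, [2.5, 97.5])

The two-level scheme matters because reading times are clustered within participants; resampling only at the trial level would understate that clustering, which is the kind of over-optimism the title points to.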
A synchronized multimodal neuroimaging dataset for studying brain language processing
We present a synchronized multimodal neuroimaging dataset for studying brain language
processing (SMN4Lang) that contains functional magnetic resonance imaging (fMRI) and …