James A. Michaelov
Title
Cited by
Year
Do Large Language Models know what humans know?
S Trott, C Jones, T Chang, J Michaelov, B Bergen
Cognitive Science 47 (7), e13309, 2023
Cited by 101 · 2023
So cloze yet so far: N400 amplitude is better predicted by distributional information than human predictability judgements
JA Michaelov, S Coulson, BK Bergen
IEEE Transactions on Cognitive and Developmental Systems 15 (3), 1033-1042, 2022
Cited by 55 · 2022
How well does surprisal explain N400 amplitude under different experimental conditions?
JA Michaelov, BK Bergen
Proceedings of the 24th Conference on Computational Natural Language …, 2020
Cited by 49 · 2020
Strong Prediction: Language model surprisal explains multiple N400 effects
JA Michaelov, MD Bardolph, CK Van Petten, BK Bergen, S Coulson
Neurobiology of language 5 (1), 107-135, 2024
Cited by 40* · 2024
Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?
JA Michaelov, MD Bardolph, S Coulson, BK Bergen
Proceedings of the Annual Meeting of the Cognitive Science Society 43, 2021
Cited by 32 · 2021
Distributional Semantics Still Can't Account for Affordances
CR Jones, TA Chang, S Coulson, JA Michaelov, S Trott, B Bergen
Proceedings of the Annual Meeting of the Cognitive Science Society 44, 2022
Cited by 26 · 2022
Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models
JA Michaelov, C Arnett, TA Chang, BK Bergen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Cited by 16 · 2023
Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers
JA Michaelov, BK Bergen
Findings of the Association for Computational Linguistics: ACL 2023, 2023
Cited by 14 · 2023
Collateral facilitation in humans and language models
JA Michaelov, BK Bergen
Proceedings of the 26th Conference on Computational Natural Language …, 2022
Cited by 14 · 2022
Measuring Sentence Information via Surprisal: Theoretical and Clinical Implications in Nonfluent Aphasia
N Rezaii, J Michaelov, S Josephy‐Hernandez, B Ren, D Hochberg, ...
Annals of Neurology 94 (4), 647-657, 2023
Cited by 13* · 2023
Can Peanuts Fall in Love with Distributional Semantics?
JA Michaelov, S Coulson, BK Bergen
Proceedings of the Annual Meeting of the Cognitive Science Society 45, 2023
Cited by 11* · 2023
The more human-like the language model, the more surprisal is the best predictor of N400 amplitude
J Michaelov, B Bergen
NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems, 2022
Cited by 7 · 2022
Do language models make human-like predictions about the coreferents of Italian anaphoric zero pronouns?
JA Michaelov, BK Bergen
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 5 · 2022
Ignoring the alternatives: The N400 is sensitive to stimulus preactivation alone
JA Michaelov, BK Bergen
Cortex 168, 82-101, 2023
Cited by 3 · 2023
The Young and the Old: (t) Release in Elderspeak
J Michaelov
Lifespans and Styles 3 (1), 2-9, 2017
Cited by 3 · 2017
Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models
C Arnett, TA Chang, JA Michaelov, BK Bergen
The 3rd Multilingual Representation Learning Workshop, 2023
Cited by 2 · 2023
Do large language models know what humans know?
S Trott, C Jones, T Chang, J Michaelov, B Bergen
arXiv preprint arXiv:2209.01515, 2022
Cited by 2 · 2022
On the Mathematical Relationship Between Contextual Probability and N400 Amplitude
JA Michaelov, BK Bergen
Open Mind 8, 859-897, 2024
Cited by 1 · 2024
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
JA Michaelov, C Arnett, BK Bergen
First Conference on Language Modeling, 2024
Cited by 1 · 2024
Emergent inabilities? Inverse scaling over the course of pretraining
JA Michaelov, BK Bergen
Findings of the Association for Computational Linguistics: EMNLP 2023, 14607 …, 2023
Cited by 1 · 2023