Valentin Hofmann
Allen Institute for AI & University of Washington
Verified email at allenai.org
Title · Cited by · Year
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
ACL - 🏆 Best Resource Paper Award, 2024
Cited by 152* · 2024
AI Generates Covertly Racist Decisions about People Based on Their Dialect
V Hofmann, PR Kalluri, D Jurafsky, S King
Nature, 2024
Cited by 88* · 2024
Dynamic Contextualized Word Embeddings
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
Cited by 78 · 2021
Superbizarre Is Not Superb: Derivational Morphology Improves BERT’s Interpretation of Complex Words
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
Cited by 77* · 2021
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
P Röttger*, V Hofmann*, V Pyatkin, M Hinck, HR Kirk, H Schütze, D Hovy
ACL - 🏆 Outstanding Paper Award, 2024
Cited by 49 · 2024
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers
V Hofmann, H Schütze, J Pierrehumbert
ACL, 2022
Cited by 40 · 2022
DagoBERT: Generating Derivational Morphology with a Pretrained Language Model
V Hofmann, JB Pierrehumbert, H Schütze
EMNLP, 2020
Cited by 35 · 2020
The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative
L Weissweiler, V Hofmann, A Köksal, H Schütze
EMNLP, 2022
Cited by 31 · 2022
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
L Weissweiler*, V Hofmann*, A Kantharuban, A Cai, R Dutt, A Hengle, ...
EMNLP, 2023
Cited by 27 · 2023
Paloma: A Benchmark for Evaluating Language Model Fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
NeurIPS, 2024
Cited by 23* · 2024
Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity
V Hofmann, X Dong, J Pierrehumbert, H Schütze
NAACL Findings, 2022
Cited by 19* · 2022
The Reddit Politosphere: A Large-Scale Text and Network Resource of Online Political Discourse
V Hofmann, H Schütze, JB Pierrehumbert
ICWSM, 2022
Cited by 18 · 2022
Graph-Enhanced Large Language Models in Asynchronous Plan Reasoning
F Lin, E La Malfa, V Hofmann, EM Yang, A Cohn, JB Pierrehumbert
ICML, 2024
Cited by 17 · 2024
A Graph Auto-Encoder Model of Derivational Morphology
V Hofmann, H Schütze, JB Pierrehumbert
ACL, 2020
Cited by 12 · 2020
Geographic Adaptation of Pretrained Language Models
V Hofmann, G Glavaš, N Ljubešić, JB Pierrehumbert, H Schütze
TACL, 2024
Cited by 11 · 2024
Predicting the Growth of Morphological Families from Social and Linguistic Factors
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2020
Cited by 11 · 2020
Explaining Pretrained Language Models' Understanding of Linguistic Structures Using Construction Grammar
L Weissweiler, V Hofmann, A Köksal, H Schütze
Frontiers in Artificial Intelligence, 2023
Cited by 3 · 2023
Derivational Morphology Reveals Analogical Generalization in Large Language Models
V Hofmann, L Weissweiler, D Mortensen, H Schütze, J Pierrehumbert
arXiv:2411.07990, 2024
Cited by 2 · 2024
CaMEL: Case Marker Extraction without Labels
L Weissweiler, V Hofmann, MJ Sabet, H Schütze
ACL, 2022
Cited by 2 · 2022
MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization
O Ahia, S Kumar, H Gonen, V Hofmann, T Limisiewicz, Y Tsvetkov, ...
NeurIPS, 2024
Cited by 1 · 2024