Valentin Hofmann
Allen Institute for AI & University of Washington
Verified email at allenai.org - Homepage
Title · Cited by · Year
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
ACL - 🏆 Best Resource Paper Award, 2024
160* · 2024
AI Generates Covertly Racist Decisions about People Based on Their Dialect
V Hofmann, PR Kalluri, D Jurafsky, S King
Nature, 2024
93* · 2024
Dynamic Contextualized Word Embeddings
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
79 · 2021
Superbizarre Is Not Superb: Derivational Morphology Improves BERT’s Interpretation of Complex Words
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
77* · 2021
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
P Röttger*, V Hofmann*, V Pyatkin, M Hinck, HR Kirk, H Schütze, D Hovy
ACL - 🏆 Outstanding Paper Award, 2024
53 · 2024
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers
V Hofmann, H Schütze, J Pierrehumbert
ACL, 2022
41 · 2022
DagoBERT: Generating Derivational Morphology with a Pretrained Language Model
V Hofmann, JB Pierrehumbert, H Schütze
EMNLP, 2020
35 · 2020
The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative
L Weissweiler, V Hofmann, A Köksal, H Schütze
EMNLP, 2022
32 · 2022
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
L Weissweiler*, V Hofmann*, A Kantharuban, A Cai, R Dutt, A Hengle, ...
EMNLP, 2023
26 · 2023
Paloma: A Benchmark for Evaluating Language Model Fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
NeurIPS, 2024
23* · 2024
Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity
V Hofmann, X Dong, J Pierrehumbert, H Schütze
NAACL Findings, 2022
19* · 2022
Graph-Enhanced Large Language Models in Asynchronous Plan Reasoning
F Lin, E La Malfa, V Hofmann, EM Yang, A Cohn, JB Pierrehumbert
ICML, 2024
18 · 2024
The Reddit Politosphere: A Large-Scale Text and Network Resource of Online Political Discourse
V Hofmann, H Schütze, JB Pierrehumbert
ICWSM, 2022
18 · 2022
Geographic Adaptation of Pretrained Language Models
V Hofmann, G Glavaš, N Ljubešić, JB Pierrehumbert, H Schütze
TACL, 2024
12 · 2024
A Graph Auto-Encoder Model of Derivational Morphology
V Hofmann, H Schütze, JB Pierrehumbert
ACL, 2020
12 · 2020
Predicting the Growth of Morphological Families from Social and Linguistic Factors
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2020
11 · 2020
CaMEL: Case Marker Extraction without Labels
L Weissweiler, V Hofmann, MJ Sabet, H Schütze
ACL, 2022
4 · 2022
Explaining Pretrained Language Models' Understanding of Linguistic Structures Using Construction Grammar
L Weissweiler, V Hofmann, A Köksal, H Schütze
Frontiers in Artificial Intelligence, 2023
3 · 2023
MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization
O Ahia, S Kumar, H Gonen, V Hofmann, T Limisiewicz, Y Tsvetkov, ...
NeurIPS, 2024
2 · 2024
Derivational Morphology Reveals Analogical Generalization in Large Language Models
V Hofmann, L Weissweiler, D Mortensen, H Schütze, J Pierrehumbert
arXiv:2411.07990, 2024
2 · 2024