Jaap Jumelet
University of Groningen
Verified email at uva.nl - Homepage
Title
Cited by
Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR, BigBench, 2022
Cited by 1318 · 2022
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
J Jumelet, D Hupkes
BlackboxNLP 2018, 2018
Cited by 69 · 2018
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
A Sinclair*, J Jumelet*, W Zuidema, R Fernández
TACL, 2022
Cited by 50 · 2022
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
J Jumelet, W Zuidema, D Hupkes
CoNLL 2019, 2019
Cited by 47 · 2019
Language Models Use Monotonicity to Assess NPI Licensing
J Jumelet, M Denić, J Szymanik, D Hupkes, S Steinert-Threlkeld
ACL Findings 2021, 2021
Cited by 31 · 2021
Language Modelling as a Multi-Task Problem
L Weber, J Jumelet, E Bruni, D Hupkes
EACL 2021, 2021
Cited by 18 · 2021
The Birth of Bias: A case study on the evolution of gender bias in an English language model
O van der Wal, J Jumelet, K Schulz, W Zuidema
GeBNLP 2022, 2022
Cited by 17 · 2022
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
A Langedijk, H Mohebbi, G Sarti, W Zuidema, J Jumelet
NAACL Findings 2024, 2023
Cited by 9 · 2023
Feature Interactions Reveal Linguistic Structure in Language Models
J Jumelet, W Zuidema
ACL Findings 2023, 2023
Cited by 9 · 2023
diagNNose: A Library for Neural Activation Analysis
J Jumelet
BlackboxNLP 2020, 2020
Cited by 9 · 2020
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
A Patil*, J Jumelet*, YY Chiu, A Lapastora, P Shen, L Wang, C Willrich, ...
TACL, 2024
Cited by 7 · 2024
Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution
J Jumelet, W Zuidema
EMNLP Findings 2023, 2023
Cited by 7 · 2023
Transformer-specific Interpretability
H Mohebbi, J Jumelet, M Hanna, A Alishahi, W Zuidema
EACL Tutorial, 2024
Cited by 6 · 2024
Curriculum learning with adam: The devil is in the wrong details
L Weber, J Jumelet, P Michel, E Bruni, D Hupkes
arXiv preprint arXiv:2308.12202, 2023
Cited by 5 · 2023
Interpretability of Language Models via Task Spaces
L Weber, J Jumelet, E Bruni, D Hupkes
ACL 2024, 2024
Cited by 4 · 2024
Do Language Models Exhibit Human-like Structural Priming Effects?
J Jumelet, W Zuidema, A Sinclair
ACL Findings 2024, 2024
Cited by 4 · 2024
ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation
J Jumelet, M Hanna, MH Kloots, A Langedijk, C Pouw, O van der Wal
BabyLM / CoNLL 2023, 2023
Cited by 4 · 2023
Attention vs non-attention for a Shapley-based explanation method
T Kersten, HM Wong, J Jumelet, D Hupkes
DeeLIO 2021, 2021
Cited by 4 · 2021
Black Big Boxes: Do Language Models Hide a Theory of Adjective Order?
J Jumelet, L Bylinina, W Zuidema, J Szymanik
arXiv preprint arXiv:2407.02136, 2024
Cited by 3 · 2024
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
A Molnar, J Jumelet, M Giulianelli, A Sinclair
CoNLL 2023, 2023
Cited by 3 · 2023
Articles 1–20