Joe Stacey
Verified email at imperial.ac.uk
Title
Cited by
Year
Supervising Model Attention with Human Explanations for Robust Natural Language Inference
J Stacey, Y Belinkov, M Rei
AAAI 2022, 2022
67* · 2022
Avoiding the hypothesis-only bias in natural language inference via ensemble adversarial training
J Stacey, P Minervini, H Dubossarsky, S Riedel, T Rocktäschel
EMNLP 2020, 2020
54* · 2020
Logical Reasoning with Span-Level Predictions for Interpretable and Robust NLI Models
J Stacey, P Minervini, H Dubossarsky, M Rei
EMNLP 2022, 2022
19* · 2022
Atomic Inference for NLI with Generated Facts as Atoms
J Stacey, P Minervini, H Dubossarsky, OM Camburu, M Rei
EMNLP 2024, 2023
8* · 2023
When and Why Does Bias Mitigation Work?
A Ravichander, J Stacey, M Rei
Findings of the Association for Computational Linguistics: EMNLP 2023, 9233-9247, 2023
4 · 2023
Distilling Robustness into Natural Language Inference Models with Domain-Targeted Augmentation
J Stacey, M Rei
ACL Findings 2024, 2023
4* · 2023
LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues
J Stacey, J Cheng, J Torr, T Guigue, J Driesen, A Coca, M Gaynor, ...
NAACL SRW 2024, 2024
1 · 2024
Articles 1–7