Improving the faithfulness of attention-based explanations with task-specific information for text classification. G Chrysostomou, N Aletras. arXiv preprint arXiv:2105.02657, 2021. Cited by 46.
Frustratingly simple pretraining alternatives to masked language modeling. A Yamaguchi, G Chrysostomou, K Margatina, N Aletras. arXiv preprint arXiv:2109.01819, 2021. Cited by 30.
An empirical study on explanations in out-of-domain settings. G Chrysostomou, N Aletras. arXiv preprint arXiv:2203.00056, 2022. Cited by 21.
Enjoy the salience: Towards better transformer-based faithful explanations with word salience. G Chrysostomou, N Aletras. arXiv preprint arXiv:2108.13759, 2021. Cited by 18.
Flexible instance-specific rationalization of NLP models. G Chrysostomou, N Aletras. Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 10545 …, 2022. Cited by 15.
On the impact of temporal concept drift on model explanations. Z Zhao, G Chrysostomou, K Bontcheva, N Aletras. arXiv preprint arXiv:2210.09197, 2022. Cited by 11.
Investigating hallucinations in pruned large language models for abstractive summarization. G Chrysostomou, Z Zhao, M Williams, N Aletras. Transactions of the Association for Computational Linguistics 12, 1163-1181, 2024. Cited by 5.
Variable instance-level explainability for text classification. G Chrysostomou, N Aletras. arXiv, 2021. Cited by 4.
Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization. G Chrysostomou, Z Zhao, M Williams, N Aletras. arXiv preprint arXiv:2311.09335, 2023. Cited by 1.
Self-calibration for Language Model Quantization and Pruning. M Williams, G Chrysostomou, N Aletras. arXiv preprint arXiv:2410.17170, 2024.
Explainable Natural Language Processing. G Chrysostomou. Computational Linguistics 48 (4), 1137-1139, 2022.
Model Interpretability for Natural Language Processing Applications. G Chrysostomou. University of Sheffield, 2022.