From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI

M Nauta, J Trienes, S Pathak, E Nguyen… - ACM Computing …, 2023 - dl.acm.org
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes has raised the question of how to evaluate explanations of machine learning (ML) …

A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability

E Dai, T Zhao, H Zhu, J Xu, Z Guo, H Liu, J Tang… - Machine Intelligence …, 2024 - Springer
Graph neural networks (GNNs) have developed rapidly in recent years. Owing to
their strong ability to model graph-structured data, GNNs are widely used in various …

A survey on automated fact-checking

Z Guo, M Schlichtkrull, A Vlachos - Transactions of the Association for …, 2022 - direct.mit.edu
Fact-checking has become increasingly important due to the speed with which both
information and misinformation can spread in the modern media ecosystem. Therefore …

Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

A Holzinger, M Dehmer, F Emmert-Streib, R Cucchiara… - Information …, 2022 - Elsevier
Medical artificial intelligence (AI) systems have been remarkably successful, even
outperforming humans at certain tasks. There is no doubt that AI is important to …

Measuring association between labels and free-text rationales

S Wiegreffe, A Marasović, NA Smith - arXiv preprint arXiv:2010.12762, 2020 - arxiv.org
In interpretable NLP, we require faithful rationales that reflect the model's decision-making
process for an explained instance. While prior work focuses on extractive rationales (a …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Using sequences of life-events to predict human lives

G Savcisens, T Eliassi-Rad, LK Hansen… - Nature Computational …, 2024 - nature.com
Here we represent human lives in a way that shares structural similarity to language, and we
exploit this similarity to adapt natural language processing techniques to examine the …

AttCAT: Explaining transformers via attentive class activation tokens

Y Qiang, D Pan, C Li, X Li, R Jang… - Advances in neural …, 2022 - proceedings.neurips.cc
Transformers have improved the state-of-the-art in various natural language processing and
computer vision tasks. However, the success of the Transformer model has not yet been duly …

Faithfulness tests for natural language explanations

P Atanasova, OM Camburu, C Lioma… - arXiv preprint arXiv …, 2023 - arxiv.org
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods giving explanations such as …

Rethinking attention-model explainability through faithfulness violation test

Y Liu, H Li, Y Guo, C Kong, J Li… - … on Machine Learning, 2022 - proceedings.mlr.press
Attention mechanisms are dominating the explainability of deep models. They produce
probability distributions over the input, which are widely deemed as feature-importance …