From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI

M Nauta, J Trienes, S Pathak, E Nguyen… - ACM Computing …, 2023 - dl.acm.org
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …

A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability

E Dai, T Zhao, H Zhu, J Xu, Z Guo, H Liu, J Tang… - Machine Intelligence …, 2024 - Springer
Graph neural networks (GNNs) have developed rapidly in recent years. Due to
their strong ability to model graph-structured data, GNNs are widely used in various …

A survey on automated fact-checking

Z Guo, M Schlichtkrull, A Vlachos - Transactions of the Association for …, 2022 - direct.mit.edu
Fact-checking has become increasingly important due to the speed with which both
information and misinformation can spread in the modern media ecosystem. Therefore …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

A Holzinger, M Dehmer, F Emmert-Streib, R Cucchiara… - Information …, 2022 - Elsevier
Medical artificial intelligence (AI) systems have been remarkably successful, even
surpassing human performance at certain tasks. There is no doubt that AI is important to …

Foresight—a generative pretrained transformer for modelling of patient timelines using electronic health records: a retrospective modelling study

Z Kraljevic, D Bean, A Shek, R Bendayan… - The Lancet Digital …, 2024 - thelancet.com
Background An electronic health record (EHR) holds detailed longitudinal information about
a patient's health status and general clinical history, a large portion of which is stored as …

Using sequences of life-events to predict human lives

G Savcisens, T Eliassi-Rad, LK Hansen… - Nature Computational …, 2024 - nature.com
Here we represent human lives in a way that shares structural similarity to language, and we
exploit this similarity to adapt natural language processing techniques to examine the …

Measuring association between labels and free-text rationales

S Wiegreffe, A Marasović, NA Smith - arXiv preprint arXiv:2010.12762, 2020 - arxiv.org
In interpretable NLP, we require faithful rationales that reflect the model's decision-making
process for an explained instance. While prior work focuses on extractive rationales (a …

Faithfulness tests for natural language explanations

P Atanasova, OM Camburu, C Lioma… - arXiv preprint arXiv …, 2023 - arxiv.org
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods giving explanations such as …

From understanding to utilization: A survey on explainability for large language models

H Luo, L Specia - arXiv preprint arXiv:2401.12874, 2024 - arxiv.org
Explainability for Large Language Models (LLMs) is a critical yet challenging aspect of
natural language processing. As LLMs are increasingly integral to diverse applications …