From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …
A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability
Graph neural networks (GNNs) have made rapid progress in recent years. Due to
their strong ability to model graph-structured data, GNNs are widely used in various …
A survey on automated fact-checking
Fact-checking has become increasingly important due to the speed with which both
information and misinformation can spread in the modern media ecosystem. Therefore …
Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence
Medical artificial intelligence (AI) systems have been remarkably successful, even
surpassing human performance at certain tasks. There is no doubt that AI is important to …
Measuring association between labels and free-text rationales
In interpretable NLP, we require faithful rationales that reflect the model's decision-making
process for an explained instance. While prior work focuses on extractive rationales (a …
Towards faithful model explanation in NLP: A survey
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …
Using sequences of life-events to predict human lives
Here we represent human lives in a way that shares structural similarity to language, and we
exploit this similarity to adapt natural language processing techniques to examine the …
AttCAT: Explaining transformers via attentive class activation tokens
Transformers have improved the state-of-the-art in various natural language processing and
computer vision tasks. However, the success of the Transformer model has not yet been duly …
Faithfulness tests for natural language explanations
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods giving explanations such as …
Rethinking attention-model explainability through faithfulness violation test
Attention mechanisms dominate the explainability of deep models. They produce
probability distributions over the input, which are widely regarded as feature-importance …