Interpretability research of deep learning: A literature survey
B Xu, G Yang - Information Fusion, 2024 - Elsevier
Deep learning (DL) has been widely used in various fields. However, its black-box nature
limits people's understanding and trust in its decision-making process. Therefore, it becomes …
Evaluating post-hoc explanations for graph neural networks via robustness analysis
This work studies the evaluation of explanations for graph neural networks (GNNs), which is
crucial to the credibility of post-hoc explainability in practical usage. Conventional evaluation …
CRAFT: Concept recursive activation factorization for explainability
Attribution methods are a popular class of explainability methods that use heatmaps to
depict the most important areas of an image that drive a model decision. Nevertheless …
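As a concrete illustration of the attribution setting this snippet describes (not the CRAFT method itself), a vanilla gradient-saliency heatmap can be sketched in a few lines of PyTorch; the classifier `model`, the `(C, H, W)` input, and the target class are placeholders assumed for the example.

```python
import torch

def saliency_heatmap(model, image, target_class):
    """Vanilla gradient saliency: per-pixel influence on the target logit.
    `model` is any torch.nn.Module classifier and `image` a (C, H, W) tensor;
    both are placeholders for illustration."""
    model.eval()
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)  # add batch dim
    score = model(x)[0, target_class]                             # scalar logit
    score.backward()
    # Aggregate gradient magnitude over channels -> (H, W) heatmap.
    return x.grad[0].abs().amax(dim=0)
```

Overlaying the resulting heatmap on the input image is exactly the kind of attribution visualization the snippet refers to.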
Harmonizing the object recognition strategies of deep neural networks with humans
The many successes of deep neural networks (DNNs) over the past decade have largely
been driven by computational scale rather than insights from biological intelligence. Here …
What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods
A multitude of explainability methods has been described to try to help users better
understand how modern AI systems make decisions. However, most performance metrics …
A holistic approach to unifying automatic concept extraction and concept importance estimation
In recent years, concept-based approaches have emerged as some of the most promising
explainability methods to help us interpret the decisions of Artificial Neural Networks (ANNs) …
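A common building block behind such concept-based pipelines is factorizing a matrix of intermediate activations into a small non-negative "concept" basis. Below is a minimal sketch of that step using scikit-learn's NMF; the activation matrix, its shapes, and the number of concepts are assumptions for illustration, not the paper's full pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(activations, n_concepts=10, seed=0):
    """Factorize non-negative activations (e.g. from a ReLU layer), shaped
    (n_patches, n_features), into per-patch concept coefficients U and a
    concept basis W so that activations ~= U @ W."""
    nmf = NMF(n_components=n_concepts, init="nndsvda", random_state=seed, max_iter=500)
    U = nmf.fit_transform(activations)   # (n_patches, n_concepts)
    W = nmf.components_                  # (n_concepts, n_features)
    return U, W

# Toy usage with random non-negative "activations".
U, W = extract_concepts(np.random.rand(256, 512), n_concepts=8)
```

Concept importance can then be estimated by measuring how strongly the model's output depends on each coefficient in U, roughly the second component named in the title above.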
Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning
Explaining the predictions of black-box neural networks is crucial when they are applied to decision-critical tasks. Thus, attribution maps are commonly used to identify important image regions …
Xplique: A deep learning explainability toolbox
Today's most advanced machine-learning models are hardly scrutable. The key challenge
for explainability methods is to assist researchers in opening up these black boxes …
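Xplique is a Python toolbox built on TensorFlow; the usage sketch below follows the general pattern from the project's documentation, but the exact class name and call signature are assumptions that should be verified against the current release rather than taken as a guaranteed API.

```python
# Hedged usage sketch; class names and signatures should be checked against
# https://github.com/deel-ai/xplique before relying on them.
import tensorflow as tf
from xplique.attributions import Saliency

model = tf.keras.applications.MobileNetV2()       # any Keras classifier (placeholder)
images = tf.random.uniform((4, 224, 224, 3))      # placeholder input batch
labels = tf.one_hot([1, 2, 3, 4], depth=1000)     # placeholder one-hot targets

explainer = Saliency(model)
explanations = explainer(images, labels)          # one attribution map per image
```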
Formally Explaining Neural Networks within Reactive Systems
Deep neural networks (DNNs) are increasingly being used as controllers in reactive
systems. However, DNNs are highly opaque, which renders it difficult to explain and justify …
Manifold-based shapley explanations for high dimensional correlated features
Explainable artificial intelligence (XAI) holds significant importance in enhancing the
reliability and transparency of network decision-making. SHapley Additive exPlanations …
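SHAP builds on Shapley values from cooperative game theory; as background for the snippet above (not the manifold-based estimator this paper proposes), here is a minimal Monte Carlo sketch of a single feature's Shapley value, where the model `f`, the instance `x`, and the background point are placeholders.

```python
import numpy as np

def shapley_value(f, x, background, feature, n_samples=1000, seed=0):
    """Monte Carlo Shapley value of `feature` for instance `x`: the average
    marginal effect of revealing x[feature] over random feature orderings,
    with not-yet-revealed features filled in from a background point."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    total = 0.0
    for _ in range(n_samples):
        order = rng.permutation(d)
        pos = int(np.where(order == feature)[0][0])
        revealed = order[:pos]              # features revealed before ours
        z_without = background.copy()
        z_without[revealed] = x[revealed]
        z_with = z_without.copy()
        z_with[feature] = x[feature]
        total += f(z_with) - f(z_without)
    return total / n_samples

# Toy usage: for a linear model the estimate equals w[i] * (x[i] - background[i]).
w = np.array([1.0, -2.0, 0.5])
phi = shapley_value(lambda z: float(w @ z), np.ones(3), np.zeros(3), feature=1)
```

Replacing absent features with independent background values perturbs points off the data manifold, a limitation that manifold-based variants such as the paper above aim to address for high-dimensional, correlated features.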