Toward transparent AI: A survey on interpreting the inner structures of deep neural networks
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …
Does a neural network really encode symbolic concepts?
Recently, a series of studies have tried to extract interactions between input variables
modeled by a DNN and define such interactions as concepts encoded by the DNN …
Interpretability of neural networks based on game-theoretic interactions
This paper introduces the system of game-theoretic interactions, which connects both the
explanation of knowledge encoded in a deep neural network (DNN) and the explanation of …
Unifying fourteen post-hoc attribution methods with Taylor interactions
Various attribution methods have been developed to explain deep neural networks (DNNs)
by inferring the attribution/importance/contribution score of each input variable to the final …
Understanding and unifying fourteen attribution methods with Taylor interactions
Various attribution methods have been developed to explain deep neural networks (DNNs)
by inferring the attribution/importance/contribution score of each input variable to the final …
Concept-level explanation for the generalization of a DNN
This paper explains the generalization power of a deep neural network (DNN) from the
perspective of interactive concepts. Many recent studies have quantified a clear emergence …
Can we faithfully represent masked states to compute Shapley values on a DNN?
Masking some input variables of a deep neural network (DNN) and computing output
changes on the masked input sample represent a typical way to compute attributions of input …
Deep Neural Network Explainability Enhancement via Causality-Erasing SHAP Method for SAR Target Recognition
Z Cui, Z Yang, Z Zhou, L Mou, K Tang… - … on Geoscience and …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs) have shown remarkable effectiveness in synthetic aperture
radar (SAR) target recognition. However, the explainability problem for DNNs remains …
Practical Diagnostic Tools for Deep Neural Networks
S Casper - 2024 - dspace.mit.edu
The most common way to evaluate AI systems is by analyzing their performance on a test
set. However, test sets can fail to identify some problems (such as out-of-distribution failures) …
Evaluation of Attribution Explanations without Ground Truth
This paper proposes a metric to evaluate the objectiveness of explanation methods of neural
networks, i.e., the accuracy of the estimated importance/attribution/saliency values of input …