Visual explanations via iterated integrated attributions
We introduce Iterated Integrated Attributions (IIA), a generic method for explaining
the predictions of vision models. IIA employs iterative integration across the input image, the …
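For orientation, the sketch below shows the single-pass integrated-gradients computation that integration-based attribution methods of this kind build on; IIA's contribution is to iterate such integration beyond the input image. The function and parameter names are illustrative assumptions, not the paper's implementation.

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of the integrated-gradients path integral.

    Illustrative sketch only; IIA extends this idea by iterating the
    integration over internal representations as well as the input.
    """
    # Interpolate along the straight-line path from baseline to input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)          # (steps, C, H, W)
    path.requires_grad_(True)

    # Gradient of the target logit w.r.t. every interpolated point.
    logits = model(path)
    logits[:, target].sum().backward()

    # Average the gradients and scale by the input-baseline difference.
    avg_grad = path.grad.mean(dim=0)
    return (x - baseline) * avg_grad
```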
Grad-SAM: Explaining transformers via gradient self-attention maps
Transformer-based language models have significantly advanced the state of the art in many
linguistic tasks. As this revolution continues, the ability to explain model predictions has …
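As a rough illustration of the idea named in the title, the hedged sketch below weights self-attention maps by the rectified gradient of the prediction with respect to those maps and aggregates over layers and heads; the exact aggregation in the paper may differ, and all names here are assumptions.

```python
import torch

def gradient_attention_relevance(attn_maps, attn_grads):
    """Combine self-attention maps with their gradients into token relevance.

    attn_maps / attn_grads: lists of tensors shaped (heads, seq, seq),
    one per layer, captured with hooks during a forward/backward pass.
    """
    scores = []
    for a, g in zip(attn_maps, attn_grads):
        # Keep only attention positively aligned with the prediction gradient.
        scores.append(a * torch.relu(g))
    # Average over layers and heads, then over the "attending" dimension
    # to obtain one relevance score per token.
    stacked = torch.stack(scores).mean(dim=(0, 1))    # (seq, seq)
    return stacked.mean(dim=0)                        # (seq,)
```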
Deep integrated explanations
This paper presents Deep Integrated Explanations (DIX), a universal method for explaining
vision models. DIX generates explanation maps by integrating information from the …
Modeling users' heterogeneous taste with diversified attentive user profiles
Two important challenges in recommender systems are modeling users with heterogeneous
taste and providing explainable recommendations. In order to improve our understanding of …
Toward explainable artificial intelligence: A survey and overview on their intrinsic properties
JX Mi, X Jiang, L Luo, Y Gao - Neurocomputing, 2024 - Elsevier
Artificial intelligence and its derivative technologies are not only playing a role in the fields of
medicine, economy, policing, transportation, and natural science computing today but also …
Interpreting BERT-based text similarity via activation and saliency maps
Recently, there has been growing interest in the ability of Transformer-based models to
produce meaningful embeddings of text with several applications, such as text similarity …
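A minimal sketch of one way to obtain a saliency map for a similarity score, assuming a Hugging Face BERT model, mean pooling, and cosine similarity; these choices and the function name are illustrative assumptions, not necessarily the paper's exact setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def similarity_saliency(text_a, text_b, model_name="bert-base-uncased"):
    """Gradient saliency for a sentence-similarity score (illustrative).

    Embed both texts, score them with cosine similarity, and back-propagate
    that score to the token embeddings of text_a.
    """
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    enc_a = tok(text_a, return_tensors="pt")
    enc_b = tok(text_b, return_tensors="pt")

    emb_a = model.embeddings.word_embeddings(enc_a["input_ids"])
    emb_a.retain_grad()
    out_a = model(inputs_embeds=emb_a, attention_mask=enc_a["attention_mask"])
    out_b = model(**enc_b)

    # Mean-pool token states into sentence vectors and score their similarity.
    vec_a = out_a.last_hidden_state.mean(dim=1)
    vec_b = out_b.last_hidden_state.mean(dim=1)
    sim = torch.cosine_similarity(vec_a, vec_b).sum()
    sim.backward()

    # Per-token saliency: gradient norm over the embedding dimension.
    return emb_a.grad.norm(dim=-1).squeeze(0)
```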
Learning to explain: A model-agnostic framework for explaining black box models
We present Learning to Explain (LTX), a model-agnostic framework designed for providing
post-hoc explanations for vision models. The LTX framework introduces an “explainer” …
A Counterfactual Framework for Learning and Evaluating Explanations for Recommender Systems
In the field of recommender systems, explainability remains a pivotal yet challenging aspect.
To address this, we introduce the Learning to eXplain Recommendations (LXR) framework …
Stochastic integrated explanations for vision models
We introduce Stochastic Integrated Explanations (SIX), a general method for explaining
predictions made by vision models. SIX employs stochastic integration on the internal …
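A hedged sketch of what stochastic integration over an internal representation could look like: gradients are sampled at random points along the path from a zero baseline to the observed activation and averaged, in the spirit of a Monte-Carlo estimate of the integration step. The interface, names, and sampling scheme are assumptions, not the paper's method.

```python
import torch

def stochastic_integration_map(feature_map, grad_fn, samples=25):
    """Monte-Carlo approximation of an integration-based explanation map.

    feature_map: internal activation tensor (C, H, W) for one image.
    grad_fn: callable returning the gradient of the target score with
             respect to a (scaled) copy of that activation.
    """
    accum = torch.zeros_like(feature_map)
    for _ in range(samples):
        # Sample a random point on the path from a zero baseline
        # to the observed activation instead of using a fixed grid.
        alpha = torch.rand(1).item()
        accum += grad_fn(alpha * feature_map)
    # Scale the averaged gradients by the activation itself.
    return feature_map * accum / samples
```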
POEM: Pattern-oriented explanations of convolutional neural networks
V Dadvar, L Golab, D Srivastava - Proceedings of the VLDB Endowment, 2023 - dl.acm.org
Convolutional Neural Networks (CNNs) are commonly used in computer vision. However,
their predictions are difficult to explain, as is the case with many deep learning models. To …