Explainability of deep vision-based autonomous driving systems: Review and challenges
This survey reviews explainability methods for vision-based self-driving systems trained with
behavior cloning. The concept of explainability has several facets and the need for …
Leveraging explanations in interactive machine learning: An overview
Explanations have gained an increasing level of interest in the AI and Machine Learning
(ML) communities in order to improve model transparency and allow users to form a mental …
Does the whole exceed its parts? The effect of AI explanations on complementary team performance
Many researchers motivate explainable AI with studies showing that human-AI team
performance on decision-making tasks improves when the AI explains its recommendations …
Counterfactual visual explanations
In this work, we develop a technique to produce counterfactual visual explanations. Given a
'query' image $I$ for which a vision system predicts class $c$, a counterfactual visual …
Large scale fine-grained categorization and domain-specific transfer learning
Transferring the knowledge learned from large-scale datasets (e.g., ImageNet) via fine-tuning
offers an effective solution for domain-specific fine-grained visual categorization (FGVC) …
On human predictions with explanations and predictions of machine learning models: A case study on deception detection
Humans are the final decision makers in critical tasks that involve ethical and legal
concerns, ranging from recidivism prediction, to medical diagnosis, to fighting against fake …
Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning
Structural data widely exists in Web applications, such as social networks in social media,
citation networks in academic websites, and threads data in online forums. Due to the …
What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods
A multitude of explainability methods has been described to try to help users better
understand how modern AI systems make decisions. However, most performance metrics …
Do models explain themselves? Counterfactual simulatability of natural language explanations
Large language models (LLMs) are trained to imitate humans to explain human decisions.
However, do LLMs explain themselves? Can they help humans build mental models of how …
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many
real-world, high-stake applications. Hundreds of papers have either proposed new feature …