Explainability of deep vision-based autonomous driving systems: Review and challenges

É Zablocki, H Ben-Younes, P Pérez, M Cord - International Journal of …, 2022 - Springer
This survey reviews explainability methods for vision-based self-driving systems trained with
behavior cloning. The concept of explainability has several facets and the need for …

Leveraging explanations in interactive machine learning: An overview

S Teso, Ö Alkan, W Stammer, E Daly - Frontiers in Artificial …, 2023 - frontiersin.org
Explanations have gained increasing interest in the AI and Machine Learning
(ML) communities as a means to improve model transparency and allow users to form a mental …

Does the whole exceed its parts? The effect of AI explanations on complementary team performance

G Bansal, T Wu, J Zhou, R Fok, B Nushi… - Proceedings of the …, 2021 - dl.acm.org
Many researchers motivate explainable AI with studies showing that human-AI team
performance on decision-making tasks improves when the AI explains its recommendations …

Counterfactual visual explanations

Y Goyal, Z Wu, J Ernst, D Batra… - … on Machine Learning, 2019 - proceedings.mlr.press
In this work, we develop a technique to produce counterfactual visual explanations. Given a
'query' image $I$ for which a vision system predicts class $c$, a counterfactual visual …

Large scale fine-grained categorization and domain-specific transfer learning

Y Cui, Y Song, C Sun, A Howard… - Proceedings of the …, 2018 - openaccess.thecvf.com
Transferring the knowledge learned from large-scale datasets (e.g., ImageNet) via fine-tuning
offers an effective solution for domain-specific fine-grained visual categorization (FGVC) …

On human predictions with explanations and predictions of machine learning models: A case study on deception detection

V Lai, C Tan - Proceedings of the conference on fairness …, 2019 - dl.acm.org
Humans are the final decision makers in critical tasks that involve ethical and legal
concerns, ranging from recidivism prediction, to medical diagnosis, to fighting against fake …

Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning

J Tan, S Geng, Z Fu, Y Ge, S Xu, Y Li… - Proceedings of the ACM …, 2022 - dl.acm.org
Structured data is pervasive in Web applications, such as social networks in social media,
citation networks on academic websites, and thread data in online forums. Due to the …

What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods

J Colin, T Fel, R Cadène… - Advances in neural …, 2022 - proceedings.neurips.cc
A multitude of explainability methods has been proposed to help users better
understand how modern AI systems make decisions. However, most performance metrics …

Do models explain themselves? Counterfactual simulatability of natural language explanations

Y Chen, R Zhong, N Ri, C Zhao, H He… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) are trained to imitate humans to explain human decisions.
However, do LLMs explain themselves? Can they help humans build mental models of how …

The effectiveness of feature attribution methods and its correlation with automatic evaluation scores

G Nguyen, D Kim, A Nguyen - Advances in Neural …, 2021 - proceedings.neurips.cc
Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many
real-world, high-stakes applications. Hundreds of papers have either proposed new feature …