From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI

M Nauta, J Trienes, S Pathak, E Nguyen… - ACM Computing …, 2023 - dl.acm.org
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes has raised the question of how to evaluate explanations of machine learning (ML) …

Delivering trustworthy AI through formal XAI

J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of systems of artificial intelligence (AI) in high-risk settings warrants the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …

On tackling explanation redundancy in decision trees

Y Izza, A Ignatiev, J Marques-Silva - Journal of Artificial Intelligence …, 2022 - jair.org
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …

Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …

On the failings of Shapley values for explainability

X Huang, J Marques-Silva - International Journal of Approximate …, 2024 - Elsevier
Explainable Artificial Intelligence (XAI) is widely considered to be critical for building
trust into the deployment of systems that integrate the use of machine learning (ML) models …

The inadequacy of Shapley values for explainability

X Huang, J Marques-Silva - arXiv preprint arXiv:2302.08160, 2023 - arxiv.org
This paper develops a rigorous argument for why the use of Shapley values in explainable
AI (XAI) will necessarily yield provably misleading information about the relative importance …

Model interpretability through the lens of computational complexity

P Barceló, M Monet, J Pérez… - Advances in neural …, 2020 - proceedings.neurips.cc
In spite of several claims stating that some models are more interpretable than others -- e.g.,
"linear models are more interpretable than deep neural networks" -- we still lack a principled …

On computing probabilistic explanations for decision trees

M Arenas, P Barceló, M Romero Orth… - Advances in …, 2022 - proceedings.neurips.cc
Formal XAI (explainable AI) is a growing area that focuses on computing explanations with
mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of …

On explaining random forests with SAT

Y Izza, J Marques-Silva - arXiv preprint arXiv:2105.10278, 2021 - arxiv.org
Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers.
Even though RFs are not interpretable, there are no dedicated non-heuristic approaches for …

Using MaxSAT for efficient explanations of tree ensembles

A Ignatiev, Y Izza, PJ Stuckey… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Tree ensembles (TEs) denote a prevalent machine learning model that does not offer
guarantees of interpretability and so represents a challenge from the perspective of explainable …