Delivering trustworthy AI through formal XAI

J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of systems of artificial intelligence (AI) in high-risk settings underscores the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …

On tackling explanation redundancy in decision trees

Y Izza, A Ignatiev, J Marques-Silva - Journal of Artificial Intelligence …, 2022 - jair.org
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …
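
The redundancy in question is easy to reproduce on a toy example: the root-to-leaf path taken by an instance is itself an explanation, but it may test features that cannot change the prediction. Below is a minimal brute-force sketch of deletion-based path minimisation (the toy tree and the exhaustive entailment check are illustrative assumptions; the paper's own algorithms avoid the enumeration):

    from itertools import product

    # Hypothetical toy tree over Boolean features x0, x1, x2: the path for (1, 1, 1)
    # tests all three features, but the class only depends on x0.
    def tree(x):
        if x[0] == 1:
            if x[1] == 1:
                return 1 if x[2] == 1 else 1   # both leaves predict class 1
            return 1
        return 0

    def entails(fixed, instance, n=3):
        """True iff fixing the features in `fixed` to their values in `instance`
        forces the tree to predict tree(instance) for every completion."""
        target = tree(instance)
        free = [i for i in range(n) if i not in fixed]
        for values in product([0, 1], repeat=len(free)):
            x = list(instance)
            for i, v in zip(free, values):
                x[i] = v
            if tree(x) != target:
                return False
        return True

    instance = (1, 1, 1)
    explanation = [0, 1, 2]              # the features tested on the path of `instance`
    for f in (0, 1, 2):                  # deletion-based minimisation of the path
        trial = [g for g in explanation if g != f]
        if entails(trial, instance):
            explanation = trial
    print(explanation)                   # [0]: the path features x1 and x2 were redundant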

Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …
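
The central object in this logic-based line of work is the abductive explanation (AXp). In the notation these papers typically use (classifier kappa, feature set F, input feature space, and a concrete instance v), a set of features X ⊆ F is a weak AXp for the prediction kappa(v) if

    \forall(\mathbf{x} \in \mathbb{F}).\;
      \Big[ \bigwedge_{i \in \mathcal{X}} (x_i = v_i) \Big]
      \rightarrow
      \big( \kappa(\mathbf{x}) = \kappa(\mathbf{v}) \big)

and an AXp is a subset-minimal weak AXp. Dually, a contrastive explanation (CXp) is a subset-minimal set of features that must be allowed to change for the prediction to change; the two families are related by minimal hitting set duality.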

On the failings of Shapley values for explainability

X Huang, J Marques-Silva - International Journal of Approximate …, 2024 - Elsevier
Explainable Artificial Intelligence (XAI) is widely considered to be critical for building
trust into the deployment of systems that integrate the use of machine learning (ML) models …
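
The failings at issue concern exact, model-based Shapley values rather than any particular SHAP implementation. The brute-force sketch below (uniform Boolean inputs and an arbitrarily chosen toy classifier, not the paper's constructions) computes them from the conditional-expectation game and contrasts them with feature membership in abductive explanations; on this toy function the feature of index 2 appears in no AXp of the instance yet receives a non-zero Shapley value, which is the kind of mismatch the paper formalises.

    from itertools import product, combinations
    from math import factorial

    N = 3
    def k(x):                            # arbitrary toy Boolean classifier
        return int((x[0] and x[1]) or (x[0] and x[2]))

    POINTS = list(product([0, 1], repeat=N))

    def expect(S, v):                    # E[k(x) | x_i = v_i for i in S], uniform inputs
        pts = [x for x in POINTS if all(x[i] == v[i] for i in S)]
        return sum(k(x) for x in pts) / len(pts)

    def shapley(i, v):                   # exact Shapley value of feature i at instance v
        others = [j for j in range(N) if j != i]
        total = 0.0
        for r in range(N):
            for S in combinations(others, r):
                w = factorial(r) * factorial(N - r - 1) / factorial(N)
                total += w * (expect(set(S) | {i}, v) - expect(set(S), v))
        return total

    def is_weak_axp(S, v):               # fixing S forces the prediction k(v)
        return all(k(x) == k(v) for x in POINTS if all(x[i] == v[i] for i in S))

    def relevant(i, v):                  # feature i occurs in some subset-minimal AXp of v
        for r in range(N + 1):
            for S in combinations(range(N), r):
                S = set(S)
                if i in S and is_weak_axp(S, v) and \
                   not any(is_weak_axp(S - {j}, v) for j in S):
                    return True
        return False

    v = (1, 1, 0)
    for i in range(N):
        print(i, round(shapley(i, v), 3), relevant(i, v))
    # feature 2 gets Shapley value -0.083, yet it occurs in no AXp of v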

On computing probabilistic explanations for decision trees

M Arenas, P Barceló, M Romero Orth… - Advances in …, 2022 - proceedings.neurips.cc
Formal XAI (explainable AI) is a growing area that focuses on computing explanations with
mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of …
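
The guarantee being relaxed is the logical entailment behind abductive explanations: under a distribution on the inputs (the uniform distribution is the standard choice in this setting), a set of features S is a delta-sufficient reason for the prediction kappa(v) when

    \Pr_{\mathbf{x}}\big( \kappa(\mathbf{x}) = \kappa(\mathbf{v}) \;\big|\; \mathbf{x}_S = \mathbf{v}_S \big) \;\ge\; \delta

with delta = 1 recovering the deterministic notion; the computational question is then how hard it is to find smallest such sets for decision trees.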

Logic for explainable AI

A Darwiche - 2023 38th Annual ACM/IEEE Symposium on …, 2023 - ieeexplore.ieee.org
A central quest in explainable AI relates to understanding the decisions made by (learned)
classifiers. There are three dimensions of this understanding that have been receiving …

Tractable explanations for d-DNNF classifiers

X Huang, Y Izza, A Ignatiev, M Cooper… - Proceedings of the …, 2022 - ojs.aaai.org
Compilation into propositional languages finds a growing number of practical uses,
including in constraint programming, diagnosis and machine learning (ML), among others …
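
What makes the compiled setting attractive is that d-DNNF circuits support conditioning and model counting in polynomial time, so "does fixing this subset of features entail the prediction?" becomes a validity check on the conditioned circuit. The sketch below illustrates that reduction on a toy circuit with a hypothetical encoding (it is not the paper's algorithm or data structures):

    TRUE, FALSE = ('const', 1), ('const', 0)

    def lit(var, pos=True):      return ('lit', var, pos)
    def AND(*cs):                return ('and',) + cs    # decomposable: children share no vars (assumed, not checked)
    def OR(*cs):                 return ('or',) + cs     # deterministic: children mutually exclusive (assumed, not checked)

    def variables(n):
        tag = n[0]
        if tag == 'const': return frozenset()
        if tag == 'lit':   return frozenset({n[1]})
        return frozenset().union(*(variables(c) for c in n[1:]))

    def condition(n, assignment):                 # substitute constants for assigned literals
        tag = n[0]
        if tag == 'const': return n
        if tag == 'lit':
            if n[1] not in assignment: return n
            return TRUE if assignment[n[1]] == n[2] else FALSE
        return (tag,) + tuple(condition(c, assignment) for c in n[1:])

    def mc(n):                                    # model count over variables(n)
        tag = n[0]
        if tag == 'const': return n[1]
        if tag == 'lit':   return 1
        if tag == 'and':                          # disjoint children: counts multiply
            out = 1
            for c in n[1:]: out *= mc(c)
            return out
        V = variables(n)                          # 'or': lift each child to vars(n), then sum
        return sum(mc(c) * 2 ** (len(V) - len(variables(c))) for c in n[1:])

    def entails(circuit, instance, S):            # fixing S (to instance) forces prediction 1?
        cond = condition(circuit, {i: instance[i] for i in S})
        return mc(cond) == 2 ** len(variables(cond))   # conditioned circuit is valid

    # toy d-DNNF for k(x1,x2,x3) = (x1 and x2) or (not x1 and x3): the OR is deterministic
    # because its children disagree on x1, and both ANDs are decomposable.
    CIRCUIT = OR(AND(lit(1, True), lit(2, True)), AND(lit(1, False), lit(3, True)))

    instance = {1: True, 2: True, 3: False}
    explanation = {1, 2, 3}
    for f in (1, 2, 3):                           # deletion loop: one entailment query per feature
        if entails(CIRCUIT, instance, explanation - {f}):
            explanation -= {f}
    print(sorted(explanation))                    # [1, 2]: x3 is dropped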

VeriX: towards verified explainability of deep neural networks

M Wu, H Wu, C Barrett - Advances in neural information …, 2023 - proceedings.neurips.cc
We present VeriX (Verified eXplainability), a system for producing optimal robust
explanations and generating counterfactuals along decision boundaries of machine …
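
The verifier-in-the-loop pattern underlying this kind of approach can be sketched independently of any particular network or verifier; below, a hypothetical `prediction_invariant` oracle stands in for the actual verification queries (one per feature), and the traversal order, perturbation model and optimality machinery of the real system are deliberately left out:

    from typing import Callable, Sequence, Set

    def explanation_by_verification(
        features: Sequence[int],
        prediction_invariant: Callable[[Set[int]], bool],
    ) -> Set[int]:
        """Split `features` into an explanation and an irrelevant set.

        `prediction_invariant(free)` must answer: if every feature in `free` may be
        perturbed (within the chosen perturbation set) while the remaining features
        stay fixed at the input's values, is the prediction guaranteed not to change?
        In practice, each call is one query to a formal verifier.
        """
        irrelevant: Set[int] = set()
        explanation: Set[int] = set()
        for f in features:                    # one verification query per feature
            if prediction_invariant(irrelevant | {f}):
                irrelevant.add(f)             # f may vary freely: not needed
            else:
                explanation.add(f)            # f must stay fixed to keep the prediction
        return explanation

With an exact oracle the returned set is sufficient by construction, and it is also subset-minimal because freeing additional features can only make invariance harder to preserve; which minimal set is found depends on the traversal order, which is where heuristics enter.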

Axiomatic aggregations of abductive explanations

G Biradar, Y Izza, E Lobo, V Viswanathan… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
The recent criticisms of the robustness of post hoc model approximation explanation
methods (like LIME and SHAP) have led to the rise of model-precise abductive explanations …

On computing probabilistic abductive explanations

Y Izza, X Huang, A Ignatiev, N Narodytska… - International Journal of …, 2023 - Elsevier
The most widely studied explainable AI (XAI) approaches are unsound. This is the case with
well-known model-agnostic explanation approaches, and it is also the case with approaches …
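
In this probabilistic setting the logical entailment test of an abductive explanation is replaced by a precision threshold. A brute-force sketch of that check follows (uniform Boolean inputs and a toy classifier chosen for illustration; the paper's algorithms do not enumerate the feature space):

    from itertools import product

    def precision(k, v, S, n):
        """Pr[k(x) = k(v) | x_i = v_i for all i in S] under uniformly distributed inputs."""
        pts = [x for x in product([0, 1], repeat=n) if all(x[i] == v[i] for i in S)]
        return sum(k(x) == k(v) for x in pts) / len(pts)

    k = lambda x: int(sum(x) >= 2)        # toy 3-input majority classifier
    v = (1, 1, 0)                         # k(v) = 1
    print(precision(k, v, {0}, 3))        # 0.75: {x0} only qualifies for delta <= 0.75
    print(precision(k, v, {0, 1}, 3))     # 1.0 : {x0, x1} entails the prediction outright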