Delivering trustworthy AI through formal XAI

J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of artificial intelligence (AI) systems in high-risk settings underscores the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …

Towards trustable explainable AI

A Ignatiev - … Joint Conference on Artificial Intelligence-Pacific …, 2020 - research.monash.edu
Explainable artificial intelligence (XAI) represents arguably one of the most crucial
challenges being faced by the area of AI these days. Although the majority of approaches to …

On tackling explanation redundancy in decision trees

Y Izza, A Ignatiev, J Marques-Silva - Journal of Artificial Intelligence …, 2022 - jair.org
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …

Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …

On tractable XAI queries based on compiled representations

G Audemard, F Koriche… - … Conference on Principles …, 2020 - univ-artois.hal.science
One of the key purposes of eXplainable AI (XAI) is to develop techniques for understanding
predictions made by Machine Learning (ML) models and for assessing how reliable …

Local explanations via necessity and sufficiency: Unifying theory and practice

DS Watson, L Gultchin, A Taly… - Uncertainty in Artificial …, 2021 - proceedings.mlr.press
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite
their importance, these notions have been conceptually underdeveloped and inconsistently …

From contrastive to abductive explanations and back again

A Ignatiev, N Narodytska, N Asher… - … Conference of the Italian …, 2020 - Springer
Explanations of Machine Learning (ML) models often address a question. Such
explanations can be related to selecting feature-value pairs which are sufficient for the …

Axiomatic Foundations of Explainability

L Amgoud, J Ben-Naim - IJCAI, 2022 - ijcai.org
Improving trust in decisions made by classification models is becoming crucial for the
acceptance of automated systems, and an important way of doing that is by providing …

Quantitative verification of neural networks and its security applications

T Baluta, S Shen, S Shinde, KS Meel… - Proceedings of the 2019 …, 2019 - dl.acm.org
Neural networks are increasingly employed in safety-critical domains. This has prompted
interest in verifying or certifying logically encoded properties of neural networks. Prior work …

Using MaxSAT for efficient explanations of tree ensembles

A Ignatiev, Y Izza, PJ Stuckey… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Tree ensembles (TEs) denote a prevalent machine learning model that does not offer
guarantees of interpretability and that represents a challenge from the perspective of explainable …