Delivering trustworthy AI through formal XAI
The deployment of systems of artificial intelligence (AI) in high-risk settings warrants the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …
Towards trustable explainable AI
A Ignatiev - … Joint Conference on Artificial Intelligence-Pacific …, 2020 - research.monash.edu
Explainable artificial intelligence (XAI) represents arguably one of the most crucial
challenges being faced by the area of AI these days. Although the majority of approaches to …
On tackling explanation redundancy in decision trees
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …
Logic-based explainability in machine learning
J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …
[PDF][PDF] On tractable XAI queries based on compiled representations
One of the key purposes of eXplainable AI (XAI) is to develop techniques for understanding
predictions made by Machine Learning (ML) models and for assessing how much reliable …
Local explanations via necessity and sufficiency: Unifying theory and practice
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite
their importance, these notions have been conceptually underdeveloped and inconsistently …
From contrastive to abductive explanations and back again
Explanations of Machine Learning (ML) models often address a question. Such
explanations can be related with selecting feature-value pairs which are sufficient for the …
[PDF][PDF] Axiomatic Foundations of Explainability.
Improving trust in decisions made by classification models is becoming crucial for the
acceptance of automated systems, and an important way of doing that is by providing …
Quantitative verification of neural networks and its security applications
Neural networks are increasingly employed in safety-critical domains. This has prompted
interest in verifying or certifying logically encoded properties of neural networks. Prior work …
Using MaxSAT for efficient explanations of tree ensembles
Tree ensembles (TEs) denote a prevalent class of machine learning models that do not offer
guarantees of interpretability and that represent a challenge from the perspective of explainable …