Counterfactual explanation trees: Transparent and consistent actionable recourse with decision trees

K Kanamori, T Takagi… - … Conference on Artificial …, 2022 - proceedings.mlr.press
Counterfactual Explanation (CE) is a post-hoc explanation method that provides a
perturbation for altering the prediction result of a classifier. An individual can interpret the …
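As a rough illustration of the counterfactual-explanation idea sketched in this snippet (not the tree-based method of the cited paper), the Python sketch below greedily perturbs an input until a toy classifier's prediction flips and returns the perturbation; the logistic-regression model, synthetic data, step size, and coordinate search are all assumptions made for the example.

# Hedged sketch: a brute-force counterfactual search for a toy classifier,
# NOT the decision-tree method of the cited paper. Model, data, and step
# sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, clf, step=0.05, max_iter=200):
    """Greedily perturb x until the predicted class flips; return the perturbation."""
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    x_cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf - x  # the actionable perturbation
        # try a small step on each coordinate and keep the one that most
        # increases the probability of the target class
        scores = []
        for j in range(len(x)):
            trial = x_cf.copy()
            trial[j] += step
            scores.append(clf.predict_proba(trial.reshape(1, -1))[0, target])
        x_cf[int(np.argmax(scores))] += step
    return None  # no counterfactual found within the step budget

print(counterfactual(X[0], clf))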

Tackling the XAI disagreement problem with regional explanations

G Laberge, YB Pequignot… - International …, 2024 - proceedings.mlr.press
The XAI Disagreement Problem concerns the fact that various explainability
methods yield different local/global insights on model behavior. Thus, given the lack of …

Hybrid predictive models: When an interpretable model collaborates with a black-box model

T Wang, Q Lin - Journal of Machine Learning Research, 2021 - jmlr.org
Interpretable machine learning has become a strong competitor for black-box models.
However, the possible loss of the predictive performance for gaining understandability is …

On the intersection of explainable and reliable AI for physical fatigue prediction

S Narteni, V Orani, E Cambiaso, M Rucco… - IEEE …, 2022 - ieeexplore.ieee.org
In the era of Industry 4.0, the use of Artificial Intelligence (AI) is widespread in occupational
settings. Since human safety is at stake, explainability and trustworthiness of AI are even …

Partially interpretable models with guarantees on coverage and accuracy

N Frost, Z Lipton, Y Mansour… - … on algorithmic learning …, 2024 - proceedings.mlr.press
Simple, sufficient explanations furnished by short decision lists can be useful for guiding
stakeholder actions. Unfortunately, this transparency can come at the expense of the higher …

Learning hybrid interpretable models: Theory, taxonomy, and methods

J Ferry, G Laberge, U Aïvodji - arXiv preprint arXiv:2303.04437, 2023 - arxiv.org
A hybrid model involves the cooperation of an interpretable model and a complex black box.
At inference, any input of the hybrid model is assigned to either its interpretable or complex …
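The routing behavior described in this snippet can be illustrated with a minimal Python sketch (a generic confidence-gated hybrid, not the taxonomy or learning methods of the cited paper): a shallow decision tree handles inputs on which it is confident, and a random forest stands in for the black box on the rest. The gating rule, threshold, models, and data are illustrative assumptions.

# Hedged sketch of a hybrid interpretable / black-box model with a simple
# confidence gate; the specific components are assumptions for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)

glass_box = DecisionTreeClassifier(max_depth=2).fit(X, y)       # interpretable part
black_box = RandomForestClassifier(n_estimators=100).fit(X, y)  # complex part

def hybrid_predict(x, conf_threshold=0.9):
    """Use the shallow tree when it is confident; otherwise defer to the forest."""
    p = glass_box.predict_proba(x.reshape(1, -1))[0]
    if p.max() >= conf_threshold:
        return glass_box.predict(x.reshape(1, -1))[0], "interpretable"
    return black_box.predict(x.reshape(1, -1))[0], "black-box"

print(hybrid_predict(X[0]))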

Causal rule sets for identifying subgroups with enhanced treatment effects

T Wang, C Rudin - INFORMS journal on computing, 2022 - pubsonline.informs.org
A key question in causal inference analyses is how to find subgroups with elevated
treatment effects. This paper takes a machine learning approach and introduces a …

Learning performance maximizing ensembles with explainability guarantees

V Pisztora, J Li - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
In this paper we propose a method for the optimal allocation of observations between an
intrinsically explainable glass box model and a black box model. An optimal allocation being …

Addressing interpretability, fairness & privacy in machine learning through combinatorial optimization methods

J Ferry - 2023 - theses.hal.science
Machine learning techniques are increasingly used for high-stakes decision making, such
as college admissions, loan attribution or recidivism prediction. It is thus crucial to ensure …

Causal rule sets for identifying subgroups with enhanced treatment effect

T Wang, C Rudin - arXiv preprint arXiv:1710.05426, 2017 - arxiv.org
A key question in causal inference analyses is how to find subgroups with elevated
treatment effects. This paper takes a machine learning approach and introduces a …