Robust counterfactual explanations in machine learning: A survey

J Jiang, F Leofante, A Rago, F Toni - arXiv preprint arXiv:2402.01928, 2024 - arxiv.org
Counterfactual explanations (CEs) are advocated as being ideally suited to providing
algorithmic recourse for subjects affected by the predictions of machine learning models …

Recourse under model multiplicity via argumentative ensembling

J Jiang, F Leofante, A Rago… - Proceedings of the 23rd …, 2024 - ifaamas.csc.liv.ac.uk
Model Multiplicity (MM), also known as predictive multiplicity or the Rashomon Effect, refers
to a scenario where multiple, equally performing machine learning (ML) models may be …

Contestable AI needs computational argumentation

F Leofante, H Ayoobi, A Dejl, G Freedman… - arXiv preprint arXiv …, 2024 - arxiv.org
AI has become pervasive in recent years, but state-of-the-art approaches predominantly
neglect the need for AI systems to be contestable. Instead, contestability is advocated by AI …

Promoting counterfactual robustness through diversity

F Leofante, N Potyka - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Counterfactual explanations shed light on the decisions of black-box models by explaining
how an input can be altered to obtain a favourable decision from the model (e.g., when a loan …

Provably robust and plausible counterfactual explanations for neural networks via robust optimisation

J Jiang, J Lan, F Leofante, A Rago… - Asian Conference on …, 2024 - proceedings.mlr.press
Counterfactual Explanations (CEs) have received increasing interest as a major
methodology for explaining neural network classifiers. Usually, CEs for an input-output pair …

Rigorous probabilistic guarantees for robust counterfactual explanations

L Marzari, F Leofante, F Cicalese, A Farinelli - arXiv preprint arXiv …, 2024 - arxiv.org
We study the problem of assessing the robustness of counterfactual explanations for deep
learning models. We focus on plausible model shifts altering model parameters …

Robust explanations for human-neural multi-agent systems with formal verification

F Leofante, A Lomuscio - European Conference on Multi-Agent Systems, 2023 - Springer
The quality of explanations in human-agent interactions is fundamental to the development
of trustworthy AI systems. In this paper we study the problem of generating robust contrastive …

Interval abstractions for robust counterfactual explanations

J Jiang, F Leofante, A Rago, F Toni - Artificial Intelligence, 2024 - Elsevier
Counterfactual Explanations (CEs) have emerged as a major paradigm in
explainable AI research, providing recourse recommendations for users affected by the …

The Curious Case of Arbitrariness in Machine Learning

P Ganesh, A Taik, G Farnadi - arXiv preprint arXiv:2501.14959, 2025 - arxiv.org
Algorithmic modelling relies on limited information in data to extrapolate outcomes for
unseen scenarios, often embedding an element of arbitrariness in its decisions. A …

RobustX: Robust Counterfactual Explanations Made Easy

J Jiang, L Marzari, A Purohit, F Leofante - arXiv preprint arXiv:2502.13751, 2025 - arxiv.org
The increasing use of Machine Learning (ML) models to aid decision-making in high-stakes
industries demands explainability to facilitate trust. Counterfactual Explanations (CEs) are …