Robust counterfactual explanations in machine learning: A survey
Counterfactual explanations (CEs) are advocated as being ideally suited to providing
algorithmic recourse for subjects affected by the predictions of machine learning models …
[PDF] Recourse under model multiplicity via argumentative ensembling
Model Multiplicity (MM), also known as predictive multiplicity or the Rashomon Effect, refers
to a scenario where multiple, equally performing machine learning (ML) models may be …
Contestable ai needs computational argumentation
AI has become pervasive in recent years, but state-of-the-art approaches predominantly
neglect the need for AI systems to be contestable. Instead, contestability is advocated by AI …
Promoting counterfactual robustness through diversity
Counterfactual explanations shed light on the decisions of black-box models by explaining
how an input can be altered to obtain a favourable decision from the model (e.g., when a loan …
Provably robust and plausible counterfactual explanations for neural networks via robust optimisation
Counterfactual Explanations (CEs) have received increasing interest as a major
methodology for explaining neural network classifiers. Usually, CEs for an input-output pair …
Rigorous probabilistic guarantees for robust counterfactual explanations
We study the problem of assessing the robustness of counterfactual explanations for deep
learning models. We focus on plausible model shifts altering model parameters …
Robust explanations for human-neural multi-agent systems with formal verification
The quality of explanations in human-agent interactions is fundamental to the development
of trustworthy AI systems. In this paper we study the problem of generating robust contrastive …
[HTML] Interval abstractions for robust counterfactual explanations
Counterfactual Explanations (CEs) have emerged as a major paradigm in
explainable AI research, providing recourse recommendations for users affected by the …
The Curious Case of Arbitrariness in Machine Learning
Algorithmic modelling relies on limited information in data to extrapolate outcomes for
unseen scenarios, often embedding an element of arbitrariness in its decisions. A …
RobustX: Robust Counterfactual Explanations Made Easy
The increasing use of Machine Learning (ML) models to aid decision-making in high-stakes
industries demands explainability to facilitate trust. Counterfactual Explanations (CEs) are …