From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …
Delivering trustworthy AI through formal XAI
The deployment of systems of artificial intelligence (AI) in high-risk settings warrants the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …
On tackling explanation redundancy in decision trees
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …
Logic-based explainability in machine learning
J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …
On the failings of Shapley values for explainability
Explainable Artificial Intelligence (XAI) is widely considered to be critical for building
trust into the deployment of systems that integrate the use of machine learning (ML) models …
The inadequacy of shapley values for explainability
This paper develops a rigorous argument for why the use of Shapley values in explainable
AI (XAI) will necessarily yield provably misleading information about the relative importance …
Model interpretability through the lens of computational complexity
In spite of several claims stating that some models are more interpretable than others, e.g., "
linear models are more interpretable than deep neural networks"--we still lack a principled …
On computing probabilistic explanations for decision trees
Formal XAI (explainable AI) is a growing area that focuses on computing explanations with
mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of …
On explaining random forests with SAT
Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers.
Even though RFs are not interpretable, there are no dedicated non-heuristic approaches for …
Using MaxSAT for efficient explanations of tree ensembles
Tree ensembles (TEs) are a prevalent class of machine learning models that offer no
guarantees of interpretability and thus represent a challenge from the perspective of explainable …