Interpretable machine learning – a brief history, state-of-the-art and challenges
We present a brief history of the field of interpretable machine learning (IML), give an
overview of state-of-the-art interpretation methods and discuss challenges. Research in IML …
The road to explainability is paved with bias: Measuring the fairness of explanations
Machine learning models in safety-critical settings like healthcare are often “black boxes”:
they contain a large number of parameters which are not transparent to users. Post-hoc …
Mathematical optimization in classification and regression trees
Classification and regression trees, as well as their variants, are off-the-shelf methods in
Machine Learning. In this paper, we review recent contributions within the Continuous …
B-LIME: An improvement of LIME for interpretable deep learning classification of cardiac arrhythmia from ECG signals
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque
technique that is regarded as a black box. To ensure the validity of the model's prediction, it …
On locality of local explanation models
Shapley values provide model agnostic feature attributions for model outcome at a particular
instance by simulating feature absence under a global population distribution. The use of a …
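The snippet above describes Shapley-value feature attributions in which "feature absence" is simulated by substituting background (population) values. A minimal, exact sketch of that idea — the toy linear model, function names, and background-mean substitution are illustrative assumptions, not any particular library's API:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a linear function of three features (assumption for illustration).
def model(x):
    w = [2.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(model, x, background):
    """Exact Shapley attributions for one instance.

    Feature absence is simulated by replacing a feature's value with the
    corresponding background value, as described in the snippet above.
    """
    n = len(x)

    def value(subset):
        # Features in `subset` keep the instance's values; the rest are
        # replaced by the background values.
        z = [x[i] if i in subset else background[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis
```

For a linear model with an independent background, the attribution of feature i reduces to `w[i] * (x[i] - background[i])`, and the attributions sum to `model(x) - model(background)` (the efficiency property). The exact computation is exponential in the number of features, which is why practical tools approximate it by sampling.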
Sig-LIME: a signal-based enhancement of LIME explanation technique
Interpreting machine learning models is facilitated by the widely employed locally
interpretable model-agnostic explanation (LIME) technique. However, when extending LIME …
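Several of the entries above build on the LIME technique: perturb the instance locally, query the black-box model, weight samples by proximity, and fit an interpretable linear surrogate. A minimal tabular sketch of that recipe — the function signature, kernel, and perturbation scheme are simplifying assumptions, not the `lime` library's API:

```python
import numpy as np

def lime_explain(predict_fn, x, num_samples=2000, kernel_width=0.75, seed=0):
    """Minimal LIME-style local explanation for tabular data (sketch).

    Perturbs `x` with Gaussian noise, weights samples by an exponential
    proximity kernel, and fits a weighted linear surrogate whose
    coefficients serve as per-feature local attributions.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    Z = x + rng.normal(scale=1.0, size=(num_samples, n))  # local perturbations
    y = predict_fn(Z)                                     # black-box predictions
    d2 = ((Z - x) ** 2).sum(axis=1)                       # squared distance to x
    w = np.exp(-d2 / kernel_width ** 2)                   # proximity kernel

    # Weighted least squares on [features, intercept]; the feature
    # coefficients of the surrogate are the explanation.
    A = np.hstack([Z, np.ones((num_samples, 1))])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:n]  # drop the intercept
```

If the black box is locally linear, the surrogate recovers its local slope; the papers above (B-LIME, Sig-LIME) modify the perturbation step for signal data such as ECG, where independent Gaussian noise destroys temporal structure.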
Towards interpretable ANNs: An exact transformation to multi-class multivariate decision trees
On the one hand, artificial neural networks (ANNs) are commonly labelled as black boxes,
lacking interpretability, an issue that hinders human understanding of ANNs' behaviors. A …
Local interpretation of deep learning models for Aspect-Based Sentiment Analysis
S Lam, Y Liu, M Broers, J van der Vos… - … Applications of Artificial …, 2025 - Elsevier
Currently, deep learning models are commonly used for Aspect-Based Sentiment Analysis
(ABSA). These deep learning models are often seen as black boxes, meaning that they are …
NLS: An accurate and yet easy-to-interpret prediction method
In recent years, the predictive power of supervised machine learning (ML) has
undergone impressive advances, achieving the status of state of the art and super-human …
Model interpretation using improved local regression with variable importance
A fundamental question on the use of ML models concerns the explanation of their
predictions for increasing transparency in decision-making. Although several interpretability …