Interpretable machine learning – a brief history, state-of-the-art and challenges

C Molnar, G Casalicchio, B Bischl - Joint European conference on …, 2020 - Springer
We present a brief history of the field of interpretable machine learning (IML), give an
overview of state-of-the-art interpretation methods and discuss challenges. Research in IML …

The road to explainability is paved with bias: Measuring the fairness of explanations

A Balagopalan, H Zhang, K Hamidieh… - Proceedings of the …, 2022 - dl.acm.org
Machine learning models in safety-critical settings like healthcare are often “black boxes”:
they contain a large number of parameters which are not transparent to users. Post-hoc …

Mathematical optimization in classification and regression trees

E Carrizosa, C Molero-Río, D Romero Morales - TOP, 2021 - Springer
Classification and regression trees, as well as their variants, are off-the-shelf methods in
Machine Learning. In this paper, we review recent contributions within the Continuous …

B-LIME: An improvement of LIME for interpretable deep learning classification of cardiac arrhythmia from ECG signals

TAA Abdullah, MSM Zahid, W Ali, SU Hassan - Processes, 2023 - mdpi.com
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque
technique that is regarded as a black box. To ensure the validity of the model's prediction, it …

On locality of local explanation models

S Ghalebikesabi, L Ter-Minassian… - Advances in neural …, 2021 - proceedings.neurips.cc
Shapley values provide model-agnostic feature attributions for a model's outcome at a particular
instance by simulating feature absence under a global population distribution. The use of a …
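
The "feature absence" simulation described in this snippet is commonly estimated by Monte Carlo sampling over random feature orderings against a background population. A minimal sketch of that standard estimator, assuming a numpy-compatible model_predict; the function and parameter names here are illustrative, not from the paper:

```python
import numpy as np

def shapley_mc(model_predict, x, background, n_samples=200, seed=0):
    # Monte Carlo Shapley values: "absent" features take values drawn
    # from a background population; features of x are switched on one
    # at a time in a random order, accumulating marginal contributions.
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        z = background[rng.integers(len(background))].copy()  # all features "absent"
        prev = model_predict(z[None, :])[0]
        for j in rng.permutation(d):
            z[j] = x[j]                           # feature j becomes "present"
            cur = model_predict(z[None, :])[0]
            phi[j] += cur - prev                  # marginal contribution of feature j
            prev = cur
    return phi / n_samples  # approximately sums to f(x) - E[f(background)]
```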

Sig-LIME: a signal-based enhancement of LIME explanation technique

TAA Abdullah, MSM Zahid, AF Turki, W Ali… - IEEE …, 2024 - ieeexplore.ieee.org
Interpreting machine learning models is facilitated by the widely employed local
interpretable model-agnostic explanations (LIME) technique. However, when extending LIME …
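
For context, the standard LIME procedure this paper builds on perturbs the instance, queries the black box, and fits a proximity-weighted linear surrogate whose coefficients serve as the explanation. A minimal tabular sketch of that standard formulation (not the Sig-LIME variant; all names are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(model_predict, x, feature_scale, n_samples=1000,
                 kernel_width=0.75, seed=0):
    # Sample a local neighbourhood, weight samples by proximity to x,
    # and read local feature effects off a weighted linear surrogate.
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, feature_scale, size=(n_samples, len(x)))
    y = model_predict(Z)                                    # black-box predictions
    dist = np.linalg.norm((Z - x) / feature_scale, axis=1)  # scaled distances
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))    # exponential kernel
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_                                  # local feature effects
```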

Towards interpretable ANNs: An exact transformation to multi-class multivariate decision trees

DT Nguyen, KE Kasmarik, HA Abbass - arXiv preprint arXiv:2003.04675, 2020 - arxiv.org
On the one hand, artificial neural networks (ANNs) are commonly labelled as black-boxes,
lacking interpretability; an issue that hinders human understanding of ANNs' behaviors. A …

[HTML] Local interpretation of deep learning models for Aspect-Based Sentiment Analysis

S Lam, Y Liu, M Broers, J van der Vos… - … Applications of Artificial …, 2025 - Elsevier
Currently, deep learning models are commonly used for Aspect-Based Sentiment Analysis
(ABSA). These deep learning models are often seen as black boxes, meaning that they are …

[HTML] NLS: An accurate and yet easy-to-interpret prediction method

V Coscrato, MHA Inácio, T Botari, R Izbicki - Neural Networks, 2023 - Elsevier
In recent years, the predictive power of supervised machine learning (ML) has
undergone impressive advances, achieving the status of state of the art and super-human …

Model interpretation using improved local regression with variable importance

GY Shimizu, R Izbicki, AC de Carvalho - arXiv preprint arXiv:2209.05371, 2022 - arxiv.org
A fundamental question on the use of ML models concerns the explanation of their
predictions for increasing transparency in decision-making. Although several interpretability …
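
The core idea named in this title, local regression with coefficients read as variable importances, can be sketched generically as a kernel-weighted linear fit around the query point (this is a generic sketch of locally weighted regression, not the authors' improved estimator; all names are illustrative):

```python
import numpy as np

def local_linear_importance(X, y, x0, bandwidth=1.0):
    # Locally weighted least squares around x0; the fitted slopes act
    # as local variable-importance scores for the prediction at x0.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))  # Gaussian weights
    A = np.hstack([np.ones((len(X), 1)), X])        # intercept column + features
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)  # weighted LS
    return beta[1:]                                 # per-feature local slopes
```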