Interpretability research of deep learning: A literature survey

B Xu, G Yang - Information Fusion, 2024 - Elsevier
Deep learning (DL) has been widely used in various fields. However, its black-box nature
limits people's understanding and trust in its decision-making process. Therefore, it becomes …

Interpretability of deep neural networks: A review of methods, classification and hardware

T Antamis, A Drosou, T Vafeiadis, A Nizamis… - Neurocomputing, 2024 - Elsevier
Artificial intelligence, and especially deep neural networks, have evolved substantially in
recent years, infiltrating numerous application domains, often with great impact on …

Fuzzy decision-making framework for explainable golden multi-machine learning models for real-time adversarial attack detection in Vehicular Ad-hoc …

AS Albahri, RA Hamid, AR Abdulnabi, OS Albahri… - Information …, 2024 - Elsevier
This paper addresses various issues in the literature concerning adversarial attack detection
in Vehicular Ad-hoc Networks (VANETs). These issues include the failure to consider both …

B-LIME: An improvement of LIME for interpretable deep learning classification of cardiac arrhythmia from ECG signals

TAA Abdullah, MSM Zahid, W Ali, SU Hassan - Processes, 2023 - mdpi.com
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque
technique that is regarded as a black box. To ensure the validity of the model's prediction, it …

M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models

X Li, M Du, J Chen, Y Chai… - Advances in Neural …, 2023 - proceedings.neurips.cc
While Explainable Artificial Intelligence (XAI) techniques have been widely studied
to explain predictions made by deep neural networks, the way to evaluate the faithfulness of …
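
As context for this benchmark entry, faithfulness metrics of this kind are commonly implemented as deletion tests: progressively mask the most-attributed features and measure how quickly the model's confidence drops. A minimal sketch under that assumption follows; the names predict_fn and attribution are illustrative placeholders, not the benchmark's API.

    import numpy as np

    def deletion_faithfulness(predict_fn, x, attribution, baseline=0.0, steps=10):
        # Rank features by absolute attribution, most important first.
        order = np.argsort(-np.abs(attribution).ravel())
        x_masked = x.ravel().astype(float).copy()
        target = int(np.argmax(predict_fn(x_masked[None, :])[0]))
        probs = [predict_fn(x_masked[None, :])[0][target]]
        chunk = max(1, len(order) // steps)
        for i in range(0, len(order), chunk):
            # Mask the next chunk of top-ranked features and re-query the model.
            x_masked[order[i:i + chunk]] = baseline
            probs.append(predict_fn(x_masked[None, :])[0][target])
        # A faithful attribution makes the probability fall quickly, so a
        # smaller area under this deletion curve indicates higher faithfulness.
        return float(np.trapz(probs) / len(probs))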

SLICE: Stabilized LIME for consistent explanations for image classification

RP Bora, P Terhörst, R Veldhuis… - Proceedings of the …, 2024 - openaccess.thecvf.com
Local Interpretable Model-agnostic Explanations (LIME) is a widely used post-hoc,
model-agnostic explainable AI (XAI) technique. It works by training a simple transparent …
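
As background for the LIME variants in this list, the core procedure the snippet alludes to, perturbing the instance, weighting samples by proximity, and fitting a transparent linear surrogate, can be sketched as follows. This is a simplified tabular version for illustration, not the SLICE algorithm itself; predict_fn stands in for any black-box model returning one score per row.

    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_sketch(predict_fn, x, n_samples=1000, kernel_width=0.75):
        rng = np.random.default_rng(0)
        # Sample perturbations around the instance being explained.
        Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
        y = predict_fn(Z)
        # Weight each perturbation by an exponential kernel on its distance to x.
        d = np.linalg.norm(Z - x, axis=1)
        w = np.exp(-(d ** 2) / kernel_width ** 2)
        # Fit the transparent surrogate; its coefficients are the explanation.
        return Ridge(alpha=1.0).fit(Z, y, sample_weight=w).coef_

The randomness in the sampling step is what makes vanilla LIME explanations vary across runs, which is the instability the SLICE paper targets.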

A posture-based measurement adjustment method for improving the accuracy of beef cattle body size measurement based on point cloud data

J Li, W Ma, Q Bai, D Tulpan, M Gong, Y Sun, X Xue… - Biosystems …, 2023 - Elsevier
Highlights: • Automatic body size measurement based on beef cattle point clouds was
achieved. • Twelve micro-pose features were defined to describe beef cattle postures. • The …

BMB-LIME: LIME with modeling local nonlinearity and uncertainty in explainability

YH Hung, CY Lee - Knowledge-Based Systems, 2024 - Elsevier
The majority of eXplainable Artificial Intelligence (XAI) methods assume the local linearity of
the decision boundary, leading to significant errors when dealing with non-linear local …
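
To see why the local-linearity assumption criticized here matters, consider a toy black box with a curved decision boundary: a linear surrogate fitted around a point on the curve can capture only part of the local behavior. The example below is purely illustrative and is not the BMB-LIME method.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def black_box(Z):
        # Toy model whose decision boundary is the unit circle.
        return 1.0 / (1.0 + np.exp(5.0 * (np.sum(Z ** 2, axis=1) - 1.0)))

    rng = np.random.default_rng(0)
    x = np.array([0.0, 1.0])                      # a point on the curved boundary
    Z = x + rng.normal(scale=0.5, size=(500, 2))  # local perturbations around x
    lin = LinearRegression().fit(Z, black_box(Z))
    # An R^2 well below 1 quantifies how much local nonlinearity the
    # linear surrogate fails to capture.
    print("local linear fit R^2:", lin.score(Z, black_box(Z)))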

Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning

E Şahin, NN Arslan, D Özdemir - Neural Computing and Applications, 2024 - Springer
Deep learning models have revolutionized numerous fields, yet their decision-making
processes often remain opaque, earning them the characterization of “black-box” models …

US-LIME: Increasing fidelity in LIME using uncertainty sampling on tabular data

H Saadatfar, Z Kiani-Zadegan, B Ghahremani-Nezhad - Neurocomputing, 2024 - Elsevier
LIME has gained significant attention as an explainable artificial intelligence algorithm that
sheds light on how complex machine learning models make decisions within a specific …
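
The uncertainty sampling in this title can be read as a filter on LIME's perturbation step: keep the synthetic points on which the black box is least certain, so the surrogate is fitted where the decision boundary actually lies. A rough sketch under that reading, not the authors' exact algorithm, with predict_fn as a placeholder for a binary classifier's positive-class probability:

    import numpy as np

    def uncertain_perturbations(predict_fn, x, n_pool=5000, n_keep=500, seed=0):
        rng = np.random.default_rng(seed)
        # Draw a large pool of candidate perturbations around the instance.
        pool = x + rng.normal(scale=1.0, size=(n_pool, x.shape[0]))
        p = predict_fn(pool)  # positive-class probabilities, shape (n_pool,)
        # Keep the candidates whose predictions sit closest to 0.5,
        # i.e. closest to the model's decision boundary.
        keep = np.argsort(np.abs(p - 0.5))[:n_keep]
        return pool[keep]

Fitting the LIME surrogate on such points concentrates it where the model actually switches classes, which matches the fidelity claim in the title.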