Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation
The ability to interpret Machine Learning (ML) models is becoming increasingly essential.
However, despite significant progress in the field, there remains a lack of rigorous …
Bridging the Gap Between Black Box AI and Clinical Practice: Advancing Explainable AI for Trust, Ethics, and Personalized Healthcare Diagnostics
DA Tuan - 2024 - preprints.org
Explainable AI (XAI) has emerged as a pivotal tool in healthcare diagnostics, offering much-
needed transparency and interpretability in complex AI models. XAI techniques, such as …
Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons
S Bassan, S Gur, R Eliav - arXiv preprint arXiv:2502.03391, 2025 - arxiv.org
Minimal sufficient reasons represent a prevalent form of explanation: the smallest subset of
input features which, when held constant at their corresponding values, ensure that the …
A Joint Learning Framework for Bridging Defect Prediction and Interpretation
G Xu, Z Zhu, X Guo, W Wang - arXiv preprint arXiv:2502.16429, 2025 - arxiv.org
Over the past fifty years, numerous software defect prediction (SDP) approaches have been
proposed. However, the ability to explain why predictors make certain predictions remains …
On the Complexity of Global Necessary Reasons to Explain Classification
Explainable AI has garnered considerable attention in recent years, as understanding the
reasons behind decisions or predictions made by AI systems is crucial for their successful …