Towards Multi-dimensional Explanation Alignment for Medical Classification

L Hu, S Lai, W Chen, H Xiao, H Lin… - Advances in …, 2025 - proceedings.neurips.cc
The lack of interpretability in the field of medical image analysis has significant ethical and
legal implications. Existing interpretable methods in this domain encounter several …

Survival Concept-Based Learning Models

SR Kirpichenko, LV Utkin, AV Konstantinov… - arXiv preprint arXiv …, 2025 - arxiv.org
Concept-based learning enhances prediction accuracy and interpretability by leveraging
high-level, human-understandable concepts. However, existing CBL frameworks do not …
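For orientation only, the sketch below shows one plausible way to combine a concept bottleneck with survival prediction: concept activations feed a Cox-style risk score trained with the negative partial likelihood. The encoder, layer sizes, and loss are illustrative assumptions and are not the construction proposed by Kirpichenko et al.

```python
import torch
import torch.nn as nn

class ConceptSurvivalModel(nn.Module):
    """Illustrative sketch: concept bottleneck feeding a Cox-style risk score.
    Not the specific method of the cited paper."""
    def __init__(self, encoder: nn.Module, feat_dim: int, n_concepts: int):
        super().__init__()
        self.encoder = encoder                                   # any feature extractor
        self.concept_head = nn.Linear(feat_dim, n_concepts)      # human-readable concepts
        self.risk_head = nn.Linear(n_concepts, 1, bias=False)    # log-hazard ratio

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_head(self.encoder(x)))
        return concepts, self.risk_head(concepts).squeeze(-1)

def cox_partial_nll(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow form, no tie handling).
    risk: predicted scores; time: follow-up times; event: 1 if event observed, else 0."""
    order = torch.argsort(time, descending=True)       # prefix after sort = risk set
    risk, event = risk[order], event[order].float()
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)   # log-sum over subjects still at risk
    return -((risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1)
```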

Improving Concept Alignment in Vision-Language Concept Bottleneck Models

NM Selvaraj, X Guo, AWK Kong, A Kot - arXiv preprint arXiv:2405.01825, 2024 - arxiv.org
Concept Bottleneck Models (CBM) map images to human-interpretable concepts before
making class predictions. Recent approaches automate CBM construction by prompting …
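For readers unfamiliar with the bottleneck structure this abstract describes, the following minimal PyTorch sketch shows the standard two-stage mapping (image → concept scores → class logits) with joint concept and label supervision. The layer shapes, loss weighting, and helper names are assumptions for illustration, not the specific construction of Selvaraj et al.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal CBM: image -> concept scores -> class logits."""
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                          # any image encoder
        self.concept_head = nn.Linear(feat_dim, n_concepts)
        self.label_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        feats = self.backbone(x)
        concept_logits = self.concept_head(feats)         # supervised with concept labels
        concepts = torch.sigmoid(concept_logits)          # human-interpretable bottleneck
        class_logits = self.label_head(concepts)          # prediction uses only concepts
        return concept_logits, class_logits

def cbm_loss(concept_logits, class_logits, concept_targets, labels, lam=1.0):
    """Joint objective: label cross-entropy plus concept supervision.
    concept_targets: float tensor of 0/1 concept annotations; labels: class indices."""
    concept_loss = nn.functional.binary_cross_entropy_with_logits(
        concept_logits, concept_targets)
    label_loss = nn.functional.cross_entropy(class_logits, labels)
    return label_loss + lam * concept_loss
```

Because the class prediction depends only on the concept activations, a practitioner can inspect or manually edit the bottleneck values to see how the downstream decision changes, which is the interpretability property the cited works build on.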