Explain any concept: Segment anything meets concept-based explanation

A Sun, P Ma, Y Yuan, S Wang - Advances in Neural …, 2024 - proceedings.neurips.cc
EXplainable AI (XAI) is an essential topic for improving human understanding of deep neural
networks (DNNs), given their black-box internals. For computer vision tasks, mainstream …

On the coherency of quantitative evaluation of visual explanations

B Vandersmissen, J Oramas - Computer Vision and Image Understanding, 2024 - Elsevier
Recent years have seen increased development of methods for justifying the
predictions of neural networks through visual explanations. These explanations usually take …

An interpretable decision-support model for breast cancer diagnosis using histopathology images

S Krishna, SS Suganthi, A Bhavsar… - Journal of Pathology …, 2023 - Elsevier
Microscopic examination of biopsy tissue slides is regarded as the gold-standard
methodology for confirming the presence of cancer cells. Manual analysis of an …

Where is my attention? An explainable AI exploration in water detection from SAR imagery

L Chen, X Cai, Z Li, J Xing, J Ai - … Journal of Applied Earth Observation and …, 2024 - Elsevier
Attention mechanisms have found extensive application in Deep Neural Networks (DNNs),
with numerous experiments over time showcasing their efficacy in improving the overall …

Deep spatial context: when attention-based models meet spatial regression

P Tomaszewska, E Sienkiewicz, MP Hoang… - arXiv preprint arXiv …, 2024 - arxiv.org
We propose the 'Deep spatial context' (DSCon) method, which serves for the investigation of
attention-based vision models using the concept of spatial context. It was inspired by …

T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers

MV Ntrougkas, N Gkalelis, V Mezaris - IEEE Access, 2024 - ieeexplore.ieee.org
The development and adoption of Vision Transformers and other deep-learning
architectures for image classification tasks have been rapid. However, the “black box” nature …

P-TAME: Explain Any Image Classifier with Trained Perturbations

MV Ntrougkas, V Mezaris, I Patras - arXiv preprint arXiv:2501.17813, 2025 - arxiv.org
The adoption of Deep Neural Networks (DNNs) in critical fields where predictions need to be
accompanied by justifications is hindered by their inherent black-box nature. In this paper …

Explainable Video Summarization for Advancing Media Content Production

E Apostolidis, G Balaouras, I Patras… - … of Information Science …, 2025 - igi-global.com
This chapter focuses on explainable video summarization, a technology that could
significantly advance the content production workflow of media organizations. It starts by …

Sentinel-2 MSI data for active fire detection in major fire-prone biomes: A multi-criteria approach

X Hu, Y Ban, A Nascetti - International Journal of Applied Earth …, 2021 - lirias.kuleuven.be
Sentinel-2 MultiSpectral Instrument (MSI) data exhibit great potential for enhanced
spatial and temporal coverage in monitoring biomass burning, which could …

A Study on the Use of Attention for Explaining Video Summarization

E Apostolidis, V Mezaris, I Patras - Proceedings of the 2nd Workshop on …, 2023 - dl.acm.org
In this paper we present our study on the use of attention for explaining video
summarization. We build on a recent work that formulates the task, called XAI-SUM, and we …