Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?

JH Lee, G Mikriukov, G Schwalbe, S Wermter… - arXiv preprint arXiv …, 2024 - arxiv.org
Concept-based XAI (C-XAI) approaches to explaining neural vision models are a promising
field of research, since explanations that refer to concepts (i.e., semantically meaningful parts …
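
As a concrete illustration of what explanations "that refer to concepts" typically look like in C-XAI, the sketch below fits a linear concept probe on a layer's activations, in the spirit of concept-activation-vector methods. The activations, the example concept, and the layer width are synthetic placeholders, not material from the survey itself.

```python
# Minimal sketch of a common C-XAI primitive: a linear "concept probe" fit on a
# layer's activations (in the spirit of concept-activation vectors). All data is
# synthetic; in practice the activations would come from an intermediate layer of
# a vision model on images that do / do not show the concept.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 512                                                      # hypothetical layer width
acts_with_concept = rng.normal(0.5, 1.0, size=(200, d))      # e.g. "striped" images
acts_without_concept = rng.normal(0.0, 1.0, size=(200, d))   # random counterexamples

X = np.vstack([acts_with_concept, acts_without_concept])
y = np.array([1] * 200 + [0] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])        # unit concept direction

# "Concept score" of a new activation: its projection onto the learned direction.
new_act = rng.normal(0.2, 1.0, size=d)
print("concept score:", float(new_act @ cav))
```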

On the foundations of shortcut learning

KL Hermann, H Mobahi, T Fel, MC Mozer - arXiv preprint arXiv …, 2023 - arxiv.org
Deep-learning models can extract a rich assortment of features from data. Which features a
model uses depends not only on predictivity, i.e., how reliably a feature indicates train-set labels …
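
The predictivity notion mentioned in the snippet can be made concrete with a toy shortcut: a spurious feature that perfectly predicts train-set labels but is uncorrelated with them at test time. The sketch below is only an illustration of that failure mode, not the paper's experimental setup; all feature names and distributions are assumptions.

```python
# Toy shortcut-learning illustration: a spurious feature that is perfectly
# predictive on the training set but uncorrelated with the labels at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, shortcut_correlated):
    y = rng.integers(0, 2, size=n)
    core = y + rng.normal(0.0, 1.0, size=n)          # weakly predictive "core" feature
    shortcut = (y.astype(float) if shortcut_correlated
                else rng.integers(0, 2, size=n).astype(float))
    return np.column_stack([core, shortcut]), y

X_train, y_train = make_split(1000, shortcut_correlated=True)
X_test, y_test = make_split(1000, shortcut_correlated=False)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train acc:", clf.score(X_train, y_train))     # ~1.0: the shortcut is used
print("test acc:", clf.score(X_test, y_test))        # drops once the shortcut breaks
```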

Understanding Video Transformers via Universal Concept Discovery

M Kowal, A Dave, R Ambrus… - Proceedings of the …, 2024 - openaccess.thecvf.com
This paper studies the problem of concept-based interpretability of transformer
representations for videos. Concretely, we seek to explain the decision-making process of …

Interpreting CLIP with sparse linear concept embeddings (SpLiCE)

U Bhalla, A Oesterling, S Srinivas, FP Calmon… - arXiv preprint arXiv …, 2024 - arxiv.org
CLIP embeddings have demonstrated remarkable performance across a wide range of
computer vision tasks. However, these high-dimensional, dense vector representations are …
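
The title suggests approximating each dense CLIP embedding as a sparse linear combination of concept directions. The sketch below shows that general idea on synthetic data using an L1-regularized, nonnegative fit; the random dictionary, the planted sparse code, and the solver settings are assumptions for illustration, not the paper's actual pipeline.

```python
# Sketch of decomposing a dense embedding into a sparse, nonnegative linear
# combination of "concept" directions. Everything here is synthetic; in practice
# the dictionary might come from embedding a concept vocabulary, and the target
# would be a real CLIP image embedding.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k = 512, 1000                                     # embedding dim, vocabulary size
concept_dict = rng.normal(size=(k, d))
concept_dict /= np.linalg.norm(concept_dict, axis=1, keepdims=True)

# Build a synthetic embedding that truly is a sparse mix of a few concepts.
true_ids = rng.choice(k, size=5, replace=False)
image_emb = concept_dict[true_ids].sum(axis=0) + 0.05 * rng.normal(size=d)

# Solve image_emb ~= concept_dict.T @ w with sparse, nonnegative weights w.
lasso = Lasso(alpha=1e-3, positive=True, max_iter=10000)
lasso.fit(concept_dict.T, image_emb)

active = np.nonzero(lasso.coef_)[0]
print("planted concepts:  ", sorted(true_ids.tolist()))
print("recovered concepts:", sorted(active.tolist()))
```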

Interpretability is in the mind of the beholder: A causal framework for human-interpretable representation learning

E Marconato, A Passerini, S Teso - Entropy, 2023 - mdpi.com
Research on Explainable Artificial Intelligence has recently started exploring the idea of
producing explanations that, rather than being expressed in terms of low-level features, are …

Pruning by explaining revisited: Optimizing attribution methods to prune CNNs and Transformers

SMV Hatefi, M Dreyer, R Achtibat, T Wiegand… - arXiv preprint arXiv …, 2024 - arxiv.org
To solve ever more complex problems, Deep Neural Networks are scaled to billions of
parameters, leading to huge computational costs. An effective approach to reduce …
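
The title points to pruning guided by attribution scores. As a minimal sketch of that general idea, the code below scores each channel of a convolution with a simple gradient-times-activation relevance proxy and zeroes out the least relevant half. The toy model, the scoring rule, and the pruning ratio are illustrative assumptions; the paper itself concerns optimizing proper attribution methods for this purpose rather than using such a crude proxy.

```python
# Minimal sketch of attribution-guided structured pruning: rank conv channels by a
# gradient*activation relevance proxy and zero out the lowest-scoring ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
x = torch.randn(8, 3, 32, 32)                  # stand-in input batch
y = torch.randint(0, 10, (8,))                 # stand-in labels

# Capture the conv output so it can be combined with its gradient after backward.
acts = {}
def save_output(module, inputs, output):
    output.retain_grad()
    acts["conv"] = output
model[0].register_forward_hook(save_output)

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

a = acts["conv"]
relevance = (a * a.grad).abs().mean(dim=(0, 2, 3))   # one score per output channel

# "Prune" the 50% least relevant channels by zeroing their filters and biases.
n_prune = relevance.numel() // 2
prune_ids = relevance.argsort()[:n_prune]
with torch.no_grad():
    model[0].weight[prune_ids] = 0
    model[0].bias[prune_ids] = 0
print("pruned channels:", prune_ids.tolist())
```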