ProtoVAE: A trustworthy self-explainable prototypical variational model

S Gautam, A Boubekki, S Hansen… - Advances in …, 2022 - proceedings.neurips.cc
The need for interpretable models has fostered the development of self-explainable
classifiers. Prior approaches are either based on multi-stage optimization schemes …

DORA: Exploring outlier representations in deep neural networks

K Bykov, M Deb, D Grinwald, KR Müller… - arXiv preprint arXiv …, 2022 - arxiv.org
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal
representations. However, the concepts they learn remain opaque, a problem that becomes …

A clinically motivated self-supervised approach for content-based image retrieval of CT liver images

KK Wickstrøm, EA Østmo, K Radiya… - … Medical Imaging and …, 2023 - Elsevier
Deep learning-based approaches for content-based image retrieval (CBIR) of computed
tomography (CT) liver images are an active field of research, but suffer from some critical …

Analysis of the Clever Hans effect in COVID-19 detection using Chest X-Ray images and Bayesian Deep Learning

JD Arias-Londoño, JI Godino-Llorente - Biomedical Signal Processing and …, 2024 - Elsevier
In recent months, the detection of COVID-19 from radiological images has become a topic of
significant interest. Several works have proposed different AI models to demonstrate the …

Finding Spurious Correlations with Function-Semantic Contrast Analysis

K Bykov, L Kopf, MMC Höhne - World Conference on Explainable Artificial …, 2023 - Springer
In the field of Computer Vision (CV), the degree to which two objects, e.g., two classes, share
a common conceptual meaning, known as semantic similarity, is closely linked to the visual …

MelSPPNET—A self-explainable recognition model for emerald ash borer vibrational signals

W Jiang, Z Chen, H Zhang, J Li - Frontiers in Forests and Global …, 2024 - frontiersin.org
This study aims to achieve early and reliable monitoring of wood-boring pests,
which are often highly concealed, have long lag times, and cause significant damage to …

Towards Transparent AI for Neurological Disorders: A Feature Extraction and Relevance Analysis Framework

MD Woodbright, A Morshed, M Browne, B Ray… - IEEE …, 2024 - ieeexplore.ieee.org
The lack of interpretability and transparency in deep learning architectures has raised
concerns among professionals in various industries and academia. One of the main …

Explaining the Impact of Training on Vision Models via Activation Clustering

A Boubekki, SG Fadel, S Mair - arXiv preprint arXiv:2411.19700, 2024 - arxiv.org
Recent developments in the field of explainable artificial intelligence (XAI) for vision models
investigate the information extracted by their feature encoder. We contribute to this effort and …

Leveraging supervoxels for medical image volume segmentation with limited supervision

S Hansen - 2022 - munin.uit.no
The majority of existing methods for machine learning-based medical image segmentation
are supervised models that require large amounts of fully annotated images. These types of …

On the Interpretability and Explainability of Prototype-Based Methods and Reinforcement Learning

SO Davoudi - 2024 - repository.library.carleton.ca
With the ever-growing use of AI to solve real-world problems, the need for transparency and
trust in these methods has given rise to Interpretable and Explainable AI. While a growing …