ProtoVAE: A trustworthy self-explainable prototypical variational model
The need for interpretable models has fostered the development of self-explainable
classifiers. Prior approaches are either based on multi-stage optimization schemes …
DORA: Exploring outlier representations in deep neural networks
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal
representations. However, the concepts they learn remain opaque, a problem that becomes …
A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
Deep learning-based approaches for content-based image retrieval (CBIR) of computed
tomography (CT) liver images are an active field of research, but suffer from some critical …
Analysis of the Clever Hans effect in COVID-19 detection using Chest X-Ray images and Bayesian Deep Learning
In recent months, the detection of COVID-19 from radiological images has become a topic of
significant interest. Several works have proposed different AI models to demonstrate the …
Finding Spurious Correlations with Function-Semantic Contrast Analysis
In the field of Computer Vision (CV), the degree to which two objects, e.g. two classes, share
a common conceptual meaning, known as semantic similarity, is closely linked to the visual …
MelSPPNET—A self-explainable recognition model for emerald ash borer vibrational signals
W Jiang, Z Chen, H Zhang, J Li - Frontiers in Forests and Global …, 2024 - frontiersin.org
This study aims to achieve early and reliable monitoring of wood-boring pests,
which are often highly concealed, have long lag times, and cause significant damage to …
Towards Transparent AI for Neurological Disorders: A Feature Extraction and Relevance Analysis Framework
The lack of interpretability and transparency in deep learning architectures has raised
concerns among professionals in various industries and academia. One of the main …
Explaining the Impact of Training on Vision Models via Activation Clustering
Recent developments in the field of explainable artificial intelligence (XAI) for vision models
investigate the information extracted by their feature encoder. We contribute to this effort and …
Leveraging supervoxels for medical image volume segmentation with limited supervision
S Hansen - 2022 - munin.uit.no
The majority of existing methods for machine learning-based medical image segmentation
are supervised models that require large amounts of fully annotated images. These types of …
On the Interpretability and Explainability of Prototype-Based Methods and Reinforcement Learning
SO Davoudi - 2024 - repository.library.carleton.ca
With the ever-growing use of AI to solve real-world problems, the need for transparency and
trust in these methods has given rise to Interpretable and Explainable AI. While a growing …