Evaluating and aggregating feature-based model explanations
A feature-based model explanation denotes how much each input feature contributes to a
model's output for a given data point. As the number of proposed explanation functions …
Looking inside the black-box: Logic-based explanations for neural networks
J Ferreira, M de Sousa Ribeiro… - Proceedings of the …, 2022 - userweb.fct.unl.pt
Deep neural network-based methods have recently enjoyed great popularity due to their
effectiveness in solving difficult tasks. Requiring minimal human effort, they have turned into …
Local path integration for attribution
Path attribution methods are a popular tool to interpret a visual model's prediction on an
input. They integrate model gradients for the input features over a path defined between the …
Robust models are more interpretable because attributions look normal
Recent work has found that adversarially-robust deep networks used for image classification
are more interpretable: their feature attributions tend to be sharper, and are more …
What makes a good explanation?: A harmonized view of properties of explanations
Interpretability provides a means for humans to verify aspects of machine learning (ML)
models and empower human+ML teaming in situations where the task cannot be fully …
Machine learning explainability and robustness: connected at the hip
This tutorial examines the synergistic relationship between explainability methods for
machine learning and a significant problem related to model quality: robustness against …
On the evaluation of deep learning interpretability methods for medical images under the scope of faithfulness
Abstract Background and Objective: Evaluating the interpretability of Deep Learning models
is crucial for building trust and gaining insights into their decision-making processes. In this …
What is the optimal attribution method for explainable ophthalmic disease classification?
Deep learning methods for ophthalmic diagnosis have shown success for tasks like
segmentation and classification but their implementation in the clinical setting is limited by …
Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
The evaluation of explainable artificial intelligence is challenging, because automated and
human-centred metrics of explanation quality may diverge. To clarify their relationship, we …
Training Robust ML-based Raw-Binary Malware Detectors in Hours, not Months
Machine-learning (ML) classifiers are increasingly used to distinguish malware from benign
binaries. Recent work has shown that ML-based detectors can be evaded by adversarial …