Adversarial attacks and defenses in explainable artificial intelligence: A survey
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging
and trusting statistical and deep learning models, as well as interpreting their predictions …
Explainable artificial intelligence for cybersecurity: a literature survey
With the extensive application of deep learning (DL) algorithms in recent years, e.g., for
detecting Android malware or vulnerable source code, artificial intelligence (AI) and …
Diffusion visual counterfactual explanations
Abstract Visual Counterfactual Explanations (VCEs) are an important tool to understand the
decisions of an image classifier. They are “small” but “realistic” semantic changes of the …
Interpreting cis-regulatory mechanisms from genomic deep neural networks using surrogate models
Deep neural networks (DNNs) have greatly advanced the ability to predict genome function
from sequence. However, elucidating underlying biological mechanisms from genomic …
SS-CAM: Smoothed Score-CAM for sharper visual feature localization
Interpretation of the underlying mechanisms of Deep Convolutional Neural Networks has
become an important aspect of research in the field of deep learning due to their …
Towards robust explanations for deep neural networks
Explanation methods shed light on the decision process of black-box classifiers such as
deep neural networks. But their usefulness can be compromised because they are …
SoK: Explainable machine learning in adversarial environments
Modern deep learning methods have long been considered black boxes due to the lack of
insights into their decision-making process. However, recent advances in explainable …
Consistent counterfactuals for deep models
Counterfactual examples are one of the most commonly-cited methods for explaining the
predictions of machine learning models in key areas such as finance and medical diagnosis …
On the robustness of removal-based feature attributions
To explain predictions made by complex machine learning models, many feature attribution
methods have been developed that assign importance scores to input features. Some recent …
Sparse visual counterfactual explanations in image space
Visual counterfactual explanations (VCEs) in image space are an important tool to
understand decisions of image classifiers as they show under which changes of the image …