Gradient-based feature attribution in explainable AI: A technical review
The surge in black-box AI models has prompted the need to explain the internal mechanism
and justify their reliability, especially in high-stakes applications, such as healthcare and …
Benchmarking and survey of explanation methods for black box models
The rise of sophisticated black-box machine learning models in Artificial Intelligence
systems has prompted the need for explanation methods that reveal how these models work …
Attcat: Explaining transformers via attentive class activation tokens
Transformers have improved the state-of-the-art in various natural language processing and
computer vision tasks. However, the success of the Transformer model has not yet been duly …
IDGI: A framework to eliminate explanation noise from integrated gradients
Integrated Gradients (IG) as well as its variants are well-known techniques for interpreting
the decisions of deep neural networks. While IG-based approaches attain state-of-the-art …
Fast axiomatic attribution for neural networks
Mitigating the dependence on spurious correlations present in the training dataset is a
quickly emerging and important topic of deep learning. Recent approaches include priors on …
MFABA: a more faithful and accelerated boundary-based attribution method for deep neural networks
To better understand the output of deep neural networks (DNNs), attribution-based methods
have been an important approach to model interpretability, assigning a score to each …
Local path integration for attribution
Path attribution methods are a popular tool to interpret a visual model's prediction on an
input. They integrate model gradients for the input features over a path defined between the …
Interpretability-aware vision transformer
Vision Transformers (ViTs) have become prominent models for solving various vision tasks.
However, the interpretability of ViTs has not kept pace with their promising performance …
Improving adversarial transferability via frequency-based stationary point search
Deep neural networks (DNNs) have been shown to be vulnerable to interference from adversarial
samples, leading to erroneous predictions. Investigating adversarial attacks can effectively …
Towards credible visual model interpretation with path attribution
With its inspirational roots in game theory, the path attribution framework stands out among
post-hoc model interpretation techniques due to its axiomatic nature. However, recent …