Interpretability research of deep learning: A literature survey
B Xu, G Yang - Information Fusion, 2024 - Elsevier
Deep learning (DL) has been widely used in various fields. However, its black-box nature
limits people's understanding and trust in its decision-making process. Therefore, it becomes …
Interpretability of deep neural networks: A review of methods, classification and hardware
Artificial intelligence, and especially deep neural networks, have evolved substantially in
recent years, infiltrating numerous domains of application, often greatly impactful to …
Fuzzy decision-making framework for explainable golden multi-machine learning models for real-time adversarial attack detection in Vehicular Ad-hoc …
This paper addresses various issues in the literature concerning adversarial attack detection
in Vehicular Ad-hoc Networks (VANETs). These issues include the failure to consider both …
B-LIME: An improvement of LIME for interpretable deep learning classification of cardiac arrhythmia from ECG signals
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque
technique that is regarded as a black box. To ensure the validity of the model's prediction, it …
: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models
While Explainable Artificial Intelligence (XAI) techniques have been widely studied
to explain predictions made by deep neural networks, the way to evaluate the faithfulness of …
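
As background on what such a benchmark evaluates: one common family of faithfulness metrics measures how quickly the model's confidence drops as the most-attributed features are removed. The sketch below (Python; the helper name deletion_curve is hypothetical) illustrates only that generic deletion-style idea, not this benchmark's own metric suite.

    import numpy as np

    def deletion_curve(x, attributions, predict_proba, baseline=0.0, steps=10):
        """Progressively zero out the most-attributed features of a 1-D sample x
        and record the predicted probability of the originally predicted class.
        A faithful attribution should produce a steep drop. Illustrative only."""
        order = np.argsort(-np.abs(attributions))       # most important features first
        target = int(np.argmax(predict_proba(x[None])[0]))
        x_mod, curve = x.copy(), []
        chunk = max(1, len(order) // steps)
        for i in range(0, len(order), chunk):
            x_mod[order[i:i + chunk]] = baseline        # "delete" a block of features
            curve.append(predict_proba(x_mod[None])[0][target])
        return np.array(curve)                          # probability after each deletion step
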
SLICE: Stabilized LIME for consistent explanations for image classification
Local Interpretable Model-agnostic Explanations (LIME) is a widely used post-hoc,
model-agnostic explainable AI (XAI) technique. It works by training a simple transparent …
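
For readers unfamiliar with the base technique these papers extend, here is a minimal sketch of vanilla LIME on tabular data, using the open-source lime package with a scikit-learn classifier. It shows the perturb-and-fit-surrogate loop that SLICE, B-LIME, BMB-LIME, and US-LIME each modify; it is not an implementation of any of those variants.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Train an opaque model whose individual predictions we want to explain.
    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # LIME perturbs the instance, queries the black box on the perturbations,
    # and fits a sparse linear surrogate weighted by proximity to the instance.
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        discretize_continuous=True,
    )
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())  # (feature condition, weight) pairs
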
A posture-based measurement adjustment method for improving the accuracy of beef cattle body size measurement based on point cloud data
Highlights: Automatic body size measurement based on beef cattle point clouds was
achieved. Twelve micro-pose features were defined to describe beef cattle postures. The …
BMB-LIME: LIME with modeling local nonlinearity and uncertainty in explainability
The majority of eXplainable Artificial Intelligence (XAI) methods assume the local linearity of
the decision boundary, leading to significant errors when dealing with non-linear local …
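
The local-linearity assumption that BMB-LIME targets is explicit in LIME's original objective (Ribeiro et al., 2016), reproduced here for context:

    \[
      \xi(x) \;=\; \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_x) + \Omega(g),
      \qquad
      \mathcal{L}(f, g, \pi_x) \;=\; \sum_{z, z'} \pi_x(z)\,\bigl(f(z) - g(z')\bigr)^2
    \]

Here f is the black-box model, G is a family of linear surrogates g(z') = w_g · z', \pi_x is a proximity kernel around the instance x, and \Omega(g) penalizes surrogate complexity. Restricting g to linear models is exactly the step that breaks down when the decision boundary is curved near x.
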
Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning
Deep learning models have revolutionized numerous fields, yet their decision-making
processes often remain opaque, earning them the characterization of “black-box” models …
US-LIME: Increasing fidelity in LIME using uncertainty sampling on tabular data
LIME has gained significant attention as an explainable artificial intelligence algorithm that
sheds light on how complex machine learning models make decisions within a specific …
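
The title suggests the fidelity gain comes from uncertainty sampling over the perturbed points. A generic version of that idea, not necessarily US-LIME's exact selection rule, keeps the perturbations on which the black box is least certain, i.e. those nearest its decision boundary:

    import numpy as np

    def select_uncertain(perturbations, predict_proba, k):
        """Keep the k perturbed samples with the highest predictive entropy
        under the black-box model. Generic uncertainty sampling; US-LIME's
        actual criterion may differ."""
        proba = predict_proba(perturbations)                     # shape (n, n_classes)
        entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)   # per-sample uncertainty
        return perturbations[np.argsort(entropy)[-k:]]

Fitting the linear surrogate on such boundary-adjacent samples concentrates its capacity where the black box actually changes its decision, which is the intuition behind fidelity-oriented sampling schemes.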