Adversarial attacks and defenses in explainable artificial intelligence: A survey
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging
and trusting statistical and deep learning models, as well as interpreting their predictions …
Explainable artificial intelligence for cybersecurity: a literature survey
With the extensive application of deep learning (DL) algorithms in recent years, e.g., for
detecting Android malware or vulnerable source code, artificial intelligence (AI) and …
SoK: Explainable machine learning in adversarial environments
M Noppel, C Wressnegger - 2024 IEEE Symposium on Security …, 2024 - ieeexplore.ieee.org
Modern deep learning methods have long been considered black boxes due to the lack of
insights into their decision-making process. However, recent advances in explainable …
When explainability turns into a threat - using xAI to fool a fake news detection method
The inclusion of Explainability of Artificial Intelligence (xAI) has become a mandatory
requirement for designing and implementing reliable, interpretable and ethical AI solutions …
Fooling explanations in text classifiers
State-of-the-art text classification models are becoming increasingly reliant on deep neural
networks (DNNs). Due to their black-box nature, faithful and robust explanation methods …
Adversarial attacks in explainable machine learning: A survey of threats against models and humans
Reliable deployment of machine learning models such as neural networks continues to be
challenging due to several limitations. Some of the main shortcomings are the lack of …
Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack
LIME has emerged as one of the most commonly referenced tools in explainable AI (XAI)
frameworks and is integrated into critical machine learning applications, e.g., healthcare and …
Revisiting the robustness of post-hoc interpretability methods
Post-hoc interpretability methods play a critical role in explainable artificial intelligence (XAI),
as they pinpoint portions of data that a trained deep learning model deemed important to …
Large language models and sentiment analysis in financial markets: A review, datasets and case study
This paper comprehensively examines Large Language Models (LLMs) in sentiment
analysis, specifically focusing on financial markets and exploring the correlation between …
Understanding and enhancing robustness of concept-based models
The rising use of deep neural networks for decision making in critical applications like
medical diagnosis and financial analysis has raised concerns regarding their reliability …