Towards human-centered explainable ai: A survey of user studies for model explanations
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …
Multiviz: Towards visualizing and understanding multimodal models
The promise of multimodal models for real-world applications has inspired research in
visualizing and understanding their internal mechanics with the end goal of empowering …
Human interpretation of saliency-based explanation over text
While a lot of research in explainable AI focuses on producing effective explanations, less
work is devoted to the question of how people understand and interpret the explanation. In …
Recent Developments on Accountability and Explainability for Complex Reasoning Tasks
P Atanasova - Accountable and Explainable Methods for Complex …, 2024 - Springer
This chapter delves into the recent accountability tools tailored for the evolving landscape of
machine learning models for complex reasoning tasks. With the increasing integration of …
Learning to scaffold: Optimizing model explanations for teaching
Modern machine learning models are opaque, and as a result there is a burgeoning
academic subfield on methods that explain these models' behavior. However, what is the …
Saliency map verbalization: Comparing feature importance representations from model-free and instruction-based methods
Saliency maps can explain a neural model's predictions by identifying important input
features. They are difficult to interpret for laypeople, especially for instances with many …
Mediators: Conversational agents explaining nlp model behavior
The human-centric explainable artificial intelligence (HCXAI) community has raised the
need for framing the explanation process as a conversation between human and machine …
Silent vulnerable dependency alert prediction with vulnerability key aspect explanation
Due to its convenience, open-source software is widely used. For beneficial reasons, open-
source maintainers often fix vulnerabilities silently, leaving their users unaware of the …
Explaining speech classification models via word-level audio segments and paralinguistic features
Recent advances in eXplainable AI (XAI) have provided new insights into how models for
vision, language, and tabular data operate. However, few approaches exist for …
Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary
Recent advances in AI models have increased the integration of AI-based decision aids into
the human decision making process. To fully unlock the potential of AI-assisted decision …