Reframing human-AI collaboration for generating free-text explanations
Large language models are increasingly capable of generating fluent-appearing text with
relatively little task-specific supervision. But can these models accurately explain …
Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems
Explainable artificially intelligent (XAI) systems form part of sociotechnical systems, e.g.,
human+AI teams tasked with making decisions. Yet, current XAI systems are rarely …
IEEE P7001: A proposed standard on transparency
This paper describes IEEE P7001, a new draft standard on transparency of autonomous
systems. In the paper, we outline the development and structure of the draft standard. We …
Advancing explainable autonomous vehicle systems: A comprehensive review and research roadmap
Given the uncertainty surrounding how existing explainability methods for autonomous
vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is …
A survey of explainable AI terminology
Abstract The field of Explainable Artificial Intelligence attempts to solve the problem of
algorithmic opacity. Many terms and notions have been introduced recently to define …
A study of automatic metrics for the evaluation of natural language explanations
As transparency becomes key for robotics and AI, it will be necessary to evaluate the
methods through which transparency is provided, including automatically generated natural …
Transparency in HRI: Trust and decision making in the face of robot errors
Robots are rapidly gaining acceptance, as the general public, industry
and researchers begin to understand the utility of robots, for example for delivery to …
Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable
Algorithmic decision support systems are widely applied in domains ranging from healthcare
to journalism. To ensure that these systems are fair and accountable, it is essential that …
ChatGPT rates natural language explanation quality like humans: But on which scales?
As AI becomes more integral in our lives, the need for transparency and responsibility
grows. While natural language explanations (NLEs) are vital for clarifying the reasoning …
Explaining tree model decisions in natural language for network intrusion detection
Network intrusion detection (NID) systems which leverage machine learning have been
shown to have strong performance in practice when used to detect malicious network traffic …