Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies
AI systems are adopted in numerous domains due to their increasingly strong predictive
performance. However, in high-stakes domains such as criminal justice and healthcare, full …
Towards human-centered explainable AI: A survey of user studies for model explanations
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …
Explanations can reduce overreliance on AI systems during decision-making
Prior work has identified a resilient phenomenon that threatens the performance of human-
AI decision-making teams: overreliance, when people agree with an AI, even when it is …
Humans inherit artificial intelligence biases
Artificial intelligence recommendations are sometimes erroneous and biased. In our
research, we hypothesized that people who perform a (simulated) medical diagnostic task …
Understanding the role of human intuition on reliance in human-AI decision-making with explanations
AI explanations are often mentioned as a way to improve human-AI decision-making, but
empirical studies have not found consistent evidence of explanations' effectiveness and, on …
"Help me help the AI": Understanding how explainability can support human-AI interaction
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …
To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making
People supported by AI-powered decision support tools frequently overrely on the AI: they
accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the …
Human-LLM collaborative annotation through effective verification of LLM labels
Large language models (LLMs) have shown remarkable performance across various natural
language processing (NLP) tasks, indicating their significant potential as data annotators …
Appropriate reliance on AI advice: Conceptualization and the effect of explanations
AI advice is becoming increasingly popular, e.g., in investment and medical treatment
decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to …