Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies

V Lai, C Chen, A Smith-Renner, QV Liao… - Proceedings of the 2023 …, 2023 - dl.acm.org
AI systems are adopted in numerous domains due to their increasingly strong predictive
performance. However, in high-stakes domains such as criminal justice and healthcare, full …

Towards human-centered explainable AI: A survey of user studies for model explanations

Y Rong, T Leemann, TT Nguyen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …

Explanations can reduce overreliance on AI systems during decision-making

H Vasconcelos, M Jörke… - Proceedings of the …, 2023 - dl.acm.org
Prior work has identified a resilient phenomenon that threatens the performance of human-
AI decision-making teams: overreliance, when people agree with an AI, even when it is …

Humans inherit artificial intelligence biases

L Vicente, H Matute - Scientific Reports, 2023 - nature.com
Artificial intelligence recommendations are sometimes erroneous and biased. In our
research, we hypothesized that people who perform a (simulated) medical diagnostic task …

Understanding the role of human intuition on reliance in human-AI decision-making with explanations

V Chen, QV Liao, J Wortman Vaughan… - Proceedings of the ACM …, 2023 - dl.acm.org
AI explanations are often mentioned as a way to improve human-AI decision-making, but
empirical studies have not found consistent evidence of explanations' effectiveness and, on …

" help me help the ai": Understanding how explainability can support human-ai interaction

SSY Kim, EA Watkins, O Russakovsky, R Fong… - Proceedings of the …, 2023 - dl.acm.org
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …

" I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

SSY Kim, QV Liao, M Vorvoreanu, S Ballard… - Proceedings of the …, 2024 - dl.acm.org
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …

To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making

Z Buçinca, MB Malaya, KZ Gajos - Proceedings of the ACM on Human …, 2021 - dl.acm.org
People supported by AI-powered decision support tools frequently overrely on the AI: they
accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the …

Human-LLM collaborative annotation through effective verification of LLM labels

X Wang, H Kim, S Rahman, K Mitra… - Proceedings of the 2024 …, 2024 - dl.acm.org
Large language models (LLMs) have shown remarkable performance across various natural
language processing (NLP) tasks, indicating their significant potential as data annotators …

Appropriate reliance on AI advice: Conceptualization and the effect of explanations

M Schemmer, N Kuehl, C Benz, A Bartos… - Proceedings of the 28th …, 2023 - dl.acm.org
AI advice is becoming increasingly popular, e.g., in investment and medical treatment
decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to …