Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies

V Lai, C Chen, A Smith-Renner, QV Liao… - Proceedings of the 2023 …, 2023 - dl.acm.org
AI systems are adopted in numerous domains due to their increasingly strong predictive
performance. However, in high-stakes domains such as criminal justice and healthcare, full …

Towards human-centered explainable AI: A survey of user studies for model explanations

Y Rong, T Leemann, TT Nguyen… - IEEE transactions on …, 2023 - ieeexplore.ieee.org
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …

Large legal fictions: Profiling legal hallucinations in large language models

M Dahl, V Magesh, M Suzgun… - Journal of Legal Analysis, 2024 - academic.oup.com
Do large language models (LLMs) know the law? LLMs are increasingly being used to
augment legal practice, education, and research, yet their revolutionary potential is …

Explanations can reduce overreliance on AI systems during decision-making

H Vasconcelos, M Jörke… - Proceedings of the …, 2023 - dl.acm.org
Prior work has identified a resilient phenomenon that threatens the performance of human-
AI decision-making teams: overreliance, when people agree with an AI, even when it is …

Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models

P Vaithilingam, T Zhang, EL Glassman - Chi conference on human …, 2022 - dl.acm.org
Recent advances in Large Language Models (LLM) have made automatic code generation
possible for real-world programming tasks in general-purpose programming languages …

The role of explainable AI in the context of the AI Act

C Panigutti, R Hamon, I Hupont… - Proceedings of the …, 2023 - dl.acm.org
The proposed EU regulation for Artificial Intelligence (AI), the AI Act, has sparked some
debate about the role of explainable AI (XAI) in high-risk AI systems. Some argue that black …

AI transparency in the age of LLMs: A human-centered research roadmap

QV Liao, JW Vaughan - arXiv preprint arXiv:2306.01941, 2023 - assets.pubpub.org
The rise of powerful large language models (LLMs) brings about tremendous opportunities
for innovation but also looming risks for individuals and society at large. We have reached a …

Explainable AI is dead, long live explainable AI! Hypothesis-driven decision support using evaluative AI

T Miller - Proceedings of the 2023 ACM conference on fairness …, 2023 - dl.acm.org
In this paper, we argue for a paradigm shift from the current model of explainable artificial
intelligence (XAI), which may be counter-productive to better human decision making. In …

Understanding the role of human intuition on reliance in human-AI decision-making with explanations

V Chen, QV Liao, J Wortman Vaughan… - Proceedings of the ACM …, 2023 - dl.acm.org
AI explanations are often mentioned as a way to improve human-AI decision-making, but
empirical studies have not found consistent evidence of explanations' effectiveness and, on …

" I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

SSY Kim, QV Liao, M Vorvoreanu, S Ballard… - Proceedings of the …, 2024 - dl.acm.org
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …