Algorithms to estimate Shapley value feature attributions

H Chen, IC Covert, SM Lundberg, SI Lee - Nature Machine Intelligence, 2023 - nature.com
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …

What is human-centered about human-centered AI? A map of the research landscape

T Capel, M Brereton - Proceedings of the 2023 CHI conference on …, 2023 - dl.acm.org
The application of Artificial Intelligence (AI) across a wide range of domains comes with both
high expectations of its benefits and dire predictions of misuse. While AI systems have …

Explanations can reduce overreliance on AI systems during decision-making

H Vasconcelos, M Jörke… - Proceedings of the …, 2023 - dl.acm.org
Prior work has identified a resilient phenomenon that threatens the performance of human-
AI decision-making teams: overreliance, when people agree with an AI, even when it is …

Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support

A Sharma, IW Lin, AS Miner, DC Atkins… - Nature Machine …, 2023 - nature.com
Advances in artificial intelligence (AI) are enabling systems that augment and collaborate
with humans to perform simple, mechanistic tasks such as scheduling meetings and …

AI transparency in the age of LLMs: A human-centered research roadmap

QV Liao, JW Vaughan - arXiv preprint arXiv:2306.01941, 2023 - assets.pubpub.org
The rise of powerful large language models (LLMs) brings about tremendous opportunities
for innovation but also looming risks for individuals and society at large. We have reached a …

Rethinking interpretability in the era of large language models

C Singh, JP Inala, M Galley, R Caruana… - arXiv preprint arXiv …, 2024 - arxiv.org
Interpretable machine learning has exploded as an area of interest over the last decade,
sparked by the rise of increasingly large datasets and deep neural networks …

Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies

V Lai, C Chen, A Smith-Renner, QV Liao… - Proceedings of the 2023 …, 2023 - dl.acm.org
AI systems are adopted in numerous domains due to their increasingly strong predictive
performance. However, in high-stakes domains such as criminal justice and healthcare, full …

Understanding the role of human intuition on reliance in human-AI decision-making with explanations

V Chen, QV Liao, J Wortman Vaughan… - Proceedings of the ACM …, 2023 - dl.acm.org
AI explanations are often mentioned as a way to improve human-AI decision-making, but
empirical studies have not found consistent evidence of explanations' effectiveness and, on …

" help me help the ai": Understanding how explainability can support human-ai interaction

SSY Kim, EA Watkins, O Russakovsky, R Fong… - Proceedings of the …, 2023 - dl.acm.org
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …

" I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

SSY Kim, QV Liao, M Vorvoreanu, S Ballard… - Proceedings of the …, 2024 - dl.acm.org
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …