Explainability pitfalls: Beyond dark patterns in explainable AI

U Ehsan, MO Riedl - Patterns, 2024 - cell.com
To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful
effects is important. In this paper, we address an important yet unarticulated type of negative …

Designing creative AI partners with COFI: A framework for modeling interaction in human-AI co-creative systems

J Rezwana, ML Maher - ACM Transactions on Computer-Human …, 2023 - dl.acm.org
Human-AI co-creativity involves both humans and AI collaborating on a shared creative
product as partners. In a creative collaboration, interaction dynamics, such as turn-taking …

Literature reviews in HCI: A review of reviews

E Stefanidi, M Bentvelzen, PW Woźniak… - Proceedings of the …, 2023 - dl.acm.org
This paper analyses Human-Computer Interaction (HCI) literature reviews to provide a clear
conceptual basis for authors, reviewers, and readers. HCI is multidisciplinary and various …

Charting the sociotechnical gap in explainable AI: A framework to address the gap in XAI

U Ehsan, K Saha, M De Choudhury… - Proceedings of the ACM …, 2023 - dl.acm.org
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the
sociotechnical gap: the divide between the technical affordances and the social needs …

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

U Ehsan, S Passi, QV Liao, L Chan, I Lee, M Muller… - arXiv, 2021 - par.nsf.gov
UPOL EHSAN, Georgia Institute of Technology, USA SAMIR PASSI, Cornell University, USA
Q. VERA LIAO, IBM Research AI, USA LARRY CHAN, Georgia Institute of Technology, USA I …

[HTML][HTML] Human-in-the-loop machine learning: Reconceptualizing the role of the user in interactive approaches

O Gómez-Carmona, D Casado-Mansilla… - Internet of Things, 2024 - Elsevier
The rise of intelligent systems and smart spaces has opened up new opportunities for
human–machine collaborations. Interactive Machine Learning (IML) contributes to fostering …

Modeling, replicating, and predicting human behavior: a survey

A Fuchs, A Passarella, M Conti - ACM Transactions on Autonomous and …, 2023 - dl.acm.org
Given the popular presupposition of human reasoning as the standard for learning and
decision making, there have been significant efforts and a growing trend in research to …

The Who in XAI: How AI Background Shapes Perceptions of AI Explanations

U Ehsan, S Passi, QV Liao, L Chan, IH Lee… - Proceedings of the CHI …, 2024 - dl.acm.org
Explainability of AI systems is critical for users to take informed actions. Understanding who
opens the black-box of AI is just as important as opening it. We conduct a mixed-methods …

Ganslider: How users control generative models for images using multiple sliders with and without feedforward information

H Dang, L Mecke, D Buschek - Proceedings of the 2022 CHI Conference …, 2022 - dl.acm.org
We investigate how multiple sliders with and without feedforward visualizations influence
users' control of generative models. In an online study (N = 138), we collected a dataset of …

A literature survey of how to convey transparency in co-located human–robot interaction

SY Schött, RM Amin, A Butz - Multimodal Technologies and Interaction, 2023 - mdpi.com
In human–robot interaction, transparency is essential to ensure that humans understand and
trust robots. Understanding is vital from an ethical perspective and benefits interaction, e.g. …