AI deception: A survey of examples, risks, and potential solutions

PS Park, S Goldstein, A O'Gara, M Chen, D Hendrycks - Patterns, 2024 - cell.com
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some …

Measuring the impact of AI in the diagnosis of hospitalized patients: a randomized clinical vignette survey study

S Jabbour, D Fouhey, S Shepard, TS Valley… - JAMA, 2023 - jamanetwork.com
Importance Artificial intelligence (AI) could support clinicians when diagnosing hospitalized
patients; however, systematic bias in AI models could worsen clinician diagnostic accuracy …

Understanding uncertainty: how lay decision-makers perceive and interpret uncertainty in human-AI decision making

S Prabhudesai, L Yang, S Asthana, X Huan… - Proceedings of the 28th …, 2023 - dl.acm.org
Decision Support Systems (DSS) based on Machine Learning (ML) often aim to assist lay
decision-makers, who are not math-savvy, in making high-stakes decisions. However …

Humans, AI, and context: Understanding end-users' trust in a real-world computer vision application

SSY Kim, EA Watkins, O Russakovsky, R Fong… - Proceedings of the …, 2023 - dl.acm.org
Trust is an important factor in people's interactions with AI systems. However, there is a lack
of empirical studies examining how real end-users trust or distrust the AI system they interact …

A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI

M Tahaei, M Constantinides, D Quercia… - arXiv preprint arXiv …, 2023 - arxiv.org
As Artificial Intelligence (AI) continues to advance rapidly, it becomes increasingly important
to consider AI's ethical and societal implications. In this paper, we present a bottom-up …

The Role of Explainability in Collaborative Human-AI Disinformation Detection

V Schmitt, LF Villa-Arenas, NI Feldhus… - The 2024 ACM …, 2024 - dl.acm.org
Manual verification has become very challenging given the increasing volume of
information shared online and the role of generative Artificial Intelligence (AI). Thus, AI …

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

J Schoeffer, M De-Arteaga, N Kuehl - … of the CHI Conference on Human …, 2024 - dl.acm.org
In this work, we study the effects of feature-based explanations on distributive fairness of AI-
assisted decisions, specifically focusing on the task of predicting occupations from short …

Mindful explanations: Prevalence and impact of mind attribution in XAI research

S Hindennach, L Shi, F Miletić, A Bulling - Proceedings of the ACM on …, 2024 - dl.acm.org
When users perceive AI systems as mindful, independent agents, they hold them
responsible instead of the AI experts who created and designed these systems. So far, it has …

Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant

G He, N Aishwarya, U Gadiraju - arXiv preprint arXiv:2501.17546, 2025 - arxiv.org
Explainable artificial intelligence (XAI) methods are being proposed to help interpret and
understand how AI systems reach specific predictions. Inspired by prior work on …

Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant

G He, G Demartini, U Gadiraju - arXiv preprint arXiv:2502.01390, 2025 - arxiv.org
Since the explosion in popularity of ChatGPT, large language models (LLMs) have
continued to impact our everyday lives. Equipped with external tools that are designed for a …