Towards bidirectional human-AI alignment: A systematic review for clarifications, framework, and future directions

H Shen, T Knearem, R Ghosh, K Alkiek… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in general-purpose AI have highlighted the importance of guiding AI
systems towards the intended goals, ethical principles, and values of individuals and …

A critical survey on fairness benefits of explainable AI

L Deck, J Schoeffer, M De-Arteaga, N Kühl - Proceedings of the 2024 …, 2024 - dl.acm.org
In this critical survey, we analyze typical claims on the relationship between explainable AI
(XAI) and fairness to disentangle the multidimensional relationship between these two …

"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

SSY Kim, QV Liao, M Vorvoreanu, S Ballard… - Proceedings of the …, 2024 - dl.acm.org
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …

Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil's Advocate

CW Chiang, Z Lu, Z Li, M Yin - … of the 29th International Conference on …, 2024 - dl.acm.org
Group decision making plays a crucial role in our complex and interconnected world. The
rise of AI technologies has the potential to provide data-driven insights to facilitate group …

In search of verifiability: Explanations rarely enable complementary performance in AI‐advised decision making

R Fok, DS Weld - AI Magazine, 2024 - Wiley Online Library
The current literature on AI‐advised decision making—involving explainable AI systems
advising human decision makers—presents a series of inconclusive and confounding …

HILL: A hallucination identifier for large language models

F Leiser, S Eckhardt, V Leuthe, M Knaeble… - Proceedings of the …, 2024 - dl.acm.org
Large language models (LLMs) are prone to hallucinations, i.e., nonsensical, unfaithful, and
undesirable text. Users tend to overrely on LLMs and corresponding hallucinations which …

“Are you really sure?” Understanding the effects of human self-confidence calibration in AI-assisted decision making

S Ma, X Wang, Y Lei, C Shi, M Yin, X Ma - Proceedings of the 2024 CHI …, 2024 - dl.acm.org
In AI-assisted decision-making, it is crucial but challenging for humans to achieve
appropriate reliance on AI. This paper approaches this problem from a human-centered …

Leveraging ChatGPT for automated human-centered explanations in recommender systems

Í Silva, L Marinho, A Said, MC Willemsen - Proceedings of the 29th …, 2024 - dl.acm.org
The adoption of recommender systems (RSs) in various domains has become increasingly
popular, but concerns have been raised about their lack of transparency and interpretability …

The impact of imperfect XAI on human-AI decision-making

K Morrison, P Spitzer, V Turri, M Feng, N Kühl… - Proceedings of the …, 2024 - dl.acm.org
Explainability techniques are rapidly being developed to improve human-AI decision-
making across various cooperative work settings. Consequently, previous research has …

Does more advice help? The effects of second opinions in AI-assisted decision making

Z Lu, D Wang, M Yin - Proceedings of the ACM on Human-Computer …, 2024 - dl.acm.org
AI assistance in decision-making has become popular, yet people's inappropriate reliance
on AI often leads to unsatisfactory human-AI collaboration performance. In this paper …