Using large language models in psychology

D Demszky, D Yang, DS Yeager, CJ Bryan… - Nature Reviews …, 2023 - nature.com
Large language models (LLMs), such as OpenAI's GPT-4, Google's Bard or Meta's LLaMa,
have created unprecedented opportunities for analysing and generating language data on a …

AI transparency in the age of LLMs: A human-centered research roadmap

QV Liao, JW Vaughan - arXiv preprint arXiv:2306.01941, 2023 - assets.pubpub.org
The rise of powerful large language models (LLMs) brings about tremendous opportunities
for innovation but also looming risks for individuals and society at large. We have reached a …

Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback

HR Kirk, B Vidgen, P Röttger, SA Hale - arXiv preprint arXiv:2303.05453, 2023 - arxiv.org
Large language models (LLMs) are used to generate content for a wide range of tasks, and
are set to reach a growing audience in coming years due to integration in product interfaces …

Human‐centered explainable artificial intelligence: An Annual Review of Information Science and Technology (ARIST) paper

M Ridley - Journal of the Association for Information Science …, 2025 - Wiley Online Library
Explainability is central to trust and accountability in artificial intelligence (AI) applications.
The field of human‐centered explainable AI (HCXAI) arose as a response to mainstream …

Time2Stop: Adaptive and Explainable Human-AI Loop for Smartphone Overuse Intervention

A Orzikulova, H Xiao, Z Li, Y Yan, Y Wang… - Proceedings of the CHI …, 2024 - dl.acm.org
Despite a rich history of investigating smartphone overuse intervention techniques, AI-based
just-in-time adaptive intervention (JITAI) methods for overuse reduction are lacking. We …

Human-LLM collaborative annotation through effective verification of LLM labels

X Wang, H Kim, S Rahman, K Mitra… - Proceedings of the CHI …, 2024 - dl.acm.org
Large language models (LLMs) have shown remarkable performance across various natural
language processing (NLP) tasks, indicating their significant potential as data annotators …

Enhancing Transformers without Self-supervised Learning: A Loss Landscape Perspective in Sequential Recommendation

V Lai, H Chen, CCM Yeh, M Xu, Y Cai… - Proceedings of the 17th …, 2023 - dl.acm.org
Transformer and its variants are a powerful class of architectures for sequential
recommendation, owing to their ability of capturing a user's dynamic interests from their past …

WatChat: Explaining perplexing programs by debugging mental models

K Chandra, KM Collins, W Crichton, T Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's
code. But sometimes, an even better explanation is a bug in the programmer's mental model …

Towards Balancing Preference and Performance through Adaptive Personalized Explainability

A Silva, P Tambwekar, M Schrum… - Proceedings of the 2024 …, 2024 - dl.acm.org
As robots and digital assistants are deployed in the real world, these agents must be able to
communicate their decision-making criteria to build trust, improve human-robot teaming, and …

The Explanation That Hits Home: The Characteristics of Verbal Explanations That Affect Human Perception in Subjective Decision-Making

S Ferguson, PA Aoyagui, R Rizvi, YH Kim… - Proceedings of the …, 2024 - dl.acm.org
Human-AI collaborative decision-making can achieve better outcomes than either party
individually. The success of this collaboration can depend on whether the human decision …