Using large language models in psychology
Large language models (LLMs), such as OpenAI's GPT-4, Google's Bard or Meta's LLaMa,
have created unprecedented opportunities for analysing and generating language data on a …
AI transparency in the age of LLMs: A human-centered research roadmap
QV Liao, JW Vaughan - arXiv preprint arXiv:2306.01941, 2023 - assets.pubpub.org
The rise of powerful large language models (LLMs) brings about tremendous opportunities
for innovation but also looming risks for individuals and society at large. We have reached a …
Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback
Large language models (LLMs) are used to generate content for a wide range of tasks, and
are set to reach a growing audience in coming years due to integration in product interfaces …
Human‐centered explainable artificial intelligence: An Annual Review of Information Science and Technology (ARIST) paper
M Ridley - Journal of the Association for Information Science …, 2025 - Wiley Online Library
Explainability is central to trust and accountability in artificial intelligence (AI) applications.
The field of human‐centered explainable AI (HCXAI) arose as a response to mainstream …
Time2Stop: Adaptive and Explainable Human-AI Loop for Smartphone Overuse Intervention
Despite a rich history of investigating smartphone overuse intervention techniques, AI-based
just-in-time adaptive intervention (JITAI) methods for overuse reduction are lacking. We …
Human-LLM collaborative annotation through effective verification of LLM labels
Large language models (LLMs) have shown remarkable performance across various natural
language processing (NLP) tasks, indicating their significant potential as data annotators …
Enhancing Transformers without Self-supervised Learning: A Loss Landscape Perspective in Sequential Recommendation
Transformer and its variants are a powerful class of architectures for sequential
recommendation, owing to their ability to capture a user's dynamic interests from their past …
Watchat: Explaining perplexing programs by debugging mental models
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's
code. But sometimes, an even better explanation is a bug in the programmer's mental model …
Towards Balancing Preference and Performance through Adaptive Personalized Explainability
As robots and digital assistants are deployed in the real world, these agents must be able to
communicate their decision-making criteria to build trust, improve human-robot teaming, and …
The Explanation That Hits Home: The Characteristics of Verbal Explanations That Affect Human Perception in Subjective Decision-Making
Human-AI collaborative decision-making can achieve better outcomes than either party
individually. The success of this collaboration can depend on whether the human decision …