Algorithmic fairness in artificial intelligence for medicine and healthcare

RJ Chen, JJ Wang, DFK Williamson, TY Chen… - Nature Biomedical …, 2023 - nature.com
In healthcare, the development and deployment of insufficiently fair systems of artificial
intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models …

Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies

V Lai, C Chen, A Smith-Renner, QV Liao… - Proceedings of the 2023 …, 2023 - dl.acm.org
AI systems are adopted in numerous domains due to their increasingly strong predictive
performance. However, in high-stakes domains such as criminal justice and healthcare, full …

Open problems and fundamental limitations of reinforcement learning from human feedback

S Casper, X Davies, C Shi, TK Gilbert… - arXiv preprint arXiv …, 2023 - arxiv.org
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems
to align with human goals. RLHF has emerged as the central method used to fine-tune state …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

AI transparency in the age of LLMs: A human-centered research roadmap

QV Liao, JW Vaughan - arXiv preprint arXiv:2306.01941, 2023 - assets.pubpub.org
The rise of powerful large language models (LLMs) brings about tremendous opportunities
for innovation but also looming risks for individuals and society at large. We have reached a …

Explainable AI is dead, long live explainable AI! Hypothesis-driven decision support using evaluative AI

T Miller - Proceedings of the 2023 ACM conference on fairness …, 2023 - dl.acm.org
In this paper, we argue for a paradigm shift from the current model of explainable artificial
intelligence (XAI), which may be counter-productive to better human decision making. In …

Underspecification presents challenges for credibility in modern machine learning

A D'Amour, K Heller, D Moldovan, B Adlam… - Journal of Machine …, 2022 - jmlr.org
Machine learning (ML) systems often exhibit unexpectedly poor behavior when they are
deployed in real-world domains. We identify underspecification in ML pipelines as a key …

A systematic literature review of user trust in AI-enabled systems: An HCI perspective

TA Bach, A Khan, H Hallock, G Beltrão… - International Journal of …, 2024 - Taylor & Francis
User trust in Artificial Intelligence (AI)-enabled systems has been increasingly
recognized and proven as a key element in fostering adoption. It has been suggested that AI …

Causal inference in natural language processing: Estimation, prediction, interpretation and beyond

A Feder, KA Keith, E Manzoor, R Pryzant… - Transactions of the …, 2022 - direct.mit.edu
A fundamental goal of scientific research is to learn about causal relationships. However,
despite its critical role in the life and social sciences, causality has not had the same …

Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities

R Lukyanenko, W Maass, VC Storey - Electronic Markets, 2022 - Springer
With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount
societal concern. Despite increased attention from researchers, the topic remains fragmented …