Algorithmic fairness in artificial intelligence for medicine and healthcare

RJ Chen, JJ Wang, DFK Williamson, TY Chen… - Nature Biomedical …, 2023 - nature.com
In healthcare, the development and deployment of insufficiently fair systems of artificial
intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models …

A review on fairness in machine learning

D Pessach, E Shmueli - ACM Computing Surveys (CSUR), 2022 - dl.acm.org
An increasing number of decisions regarding the daily lives of human beings are being
controlled by artificial intelligence and machine learning (ML) algorithms in spheres ranging …

Capabilities of GPT-4 on medical challenge problems

H Nori, N King, SM McKinney, D Carignan… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have demonstrated remarkable capabilities in natural
language understanding and generation across various domains, including medicine. We …

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

S Ali, T Abuhmed, S El-Sappagh, K Muhammad… - Information Fusion, 2023 - Elsevier
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated
applications, but the outcomes of many AI models are challenging to comprehend and trust …

Last layer re-training is sufficient for robustness to spurious correlations

P Kirichenko, P Izmailov, AG Wilson - arXiv preprint arXiv:2204.02937, 2022 - arxiv.org
Neural network classifiers can largely rely on simple spurious features, such as
backgrounds, to make predictions. However, even in these cases, we show that they still …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Just train twice: Improving group robustness without training group information

EZ Liu, B Haghgoo, AS Chen… - International …, 2021 - proceedings.mlr.press
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …

Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, RGH Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

Explainable AI: A review of machine learning interpretability methods

P Linardatos, V Papastefanopoulos, S Kotsiantis - Entropy, 2020 - mdpi.com
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption,
with machine learning systems demonstrating superhuman performance in a significant …