Multiple stakeholders drive diverse interpretability requirements for machine learning in healthcare

F Imrie, R Davis, M van der Schaar - Nature Machine Intelligence, 2023 - nature.com
Applications of machine learning are becoming increasingly common in medicine and
healthcare, enabling more accurate predictive models. However, this often comes at the cost …

Advances of machine learning in materials science: Ideas and techniques

SS Chong, YS Ng, HQ Wang, JC Zheng - Frontiers of Physics, 2024 - Springer
In this big data era, the use of large datasets in conjunction with machine learning (ML) has
become increasingly popular in both industry and academia. In recent times, the field of …

Concept activation regions: A generalized framework for concept-based explanations

J Crabbé, M van der Schaar - Advances in Neural …, 2022 - proceedings.neurips.cc
Concept-based explanations permit understanding the predictions of a deep neural
network (DNN) through the lens of concepts specified by users. Existing methods assume …

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy

MR Taesiri, G Nguyen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Explaining artificial intelligence (AI) predictions is increasingly important and even
imperative in many high-stakes applications where humans are the ultimate decision-makers …

Personalising intravenous to oral antibiotic switch decision making through fair interpretable machine learning

WJ Bolton, R Wilson, M Gilchrist, P Georgiou… - Nature …, 2024 - nature.com
Antimicrobial resistance (AMR) and healthcare-associated infections pose a significant
threat globally. One key prevention strategy is to follow antimicrobial stewardship practices …

Accountability in offline reinforcement learning: Explaining decisions with a corpus of examples

H Sun, A Hüyük, D Jarrett… - Advances in Neural …, 2023 - proceedings.neurips.cc
Learning controllers from offline data in decision-making systems is an essential area of
research due to its potential to reduce the risk of applying them in real-world systems …

What is flagged in uncertainty quantification? Latent density models for uncertainty categorization

H Sun, B van Breugel, J Crabbé… - Advances in …, 2023 - proceedings.neurips.cc
Uncertainty quantification (UQ) is essential for creating trustworthy machine learning
models. Recent years have seen a steep rise in UQ methods that can flag suspicious …

AutoPrognosis 2.0: Democratizing diagnostic and prognostic modeling in healthcare with automated machine learning

F Imrie, B Cebere, EF McKinney… - PLOS Digital …, 2023 - journals.plos.org
Diagnostic and prognostic models are increasingly important in medicine and inform many
clinical decisions. Recently, machine learning approaches have shown improvement over …

Bridging the worlds of pharmacometrics and machine learning

K Stankevičiūtė, JB Woillard, RW Peck… - Clinical …, 2023 - Springer
Precision medicine requires individualized modeling of disease and drug dynamics, with
machine learning-based computational techniques gaining increasing popularity. The …

Benchmarking heterogeneous treatment effect models through the lens of interpretability

J Crabbé, A Curth, I Bica… - Advances in Neural …, 2022 - proceedings.neurips.cc
Estimating personalized effects of treatments is a complex, yet pervasive problem. To tackle
it, recent developments in the machine learning (ML) literature on heterogeneous treatment …