Foundations & trends in multimodal machine learning: Principles, challenges, and open questions

PP Liang, A Zadeh, LP Morency - ACM Computing Surveys, 2024 - dl.acm.org
Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design
computer agents with intelligent capabilities such as understanding, reasoning, and learning …

Algorithms to estimate Shapley value feature attributions

H Chen, IC Covert, SM Lundberg, SI Lee - Nature Machine Intelligence, 2023 - nature.com
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …

Language in a bottle: Language model guided concept bottlenecks for interpretable image classification

Y Yang, A Panagopoulou, S Zhou… - Proceedings of the …, 2023 - openaccess.thecvf.com
Concept Bottleneck Models (CBM) are inherently interpretable models that factor
model decisions into human-readable concepts. They allow people to easily understand …

Transparency of deep neural networks for medical image analysis: A review of interpretability methods

Z Salahuddin, HC Woodruff, A Chatterjee… - Computers in biology and …, 2022 - Elsevier
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for
diagnosis and treatment decisions. Deep neural networks have shown the same or better …

Interpretable and explainable machine learning: A methods-centric overview with concrete examples

R Marcinkevičs, JE Vogt - Wiley Interdisciplinary Reviews: Data …, 2023 - Wiley Online Library
Interpretability and explainability are crucial for machine learning (ML) and statistical
applications in medicine, economics, law, and natural sciences and form an essential …

Concept embedding models: Beyond the accuracy-explainability trade-off

M Espinosa Zarlenga, P Barbiero… - Advances in …, 2022 - proceedings.neurips.cc
Deploying AI-powered systems requires trustworthy models supporting effective human
interactions, going beyond raw prediction accuracy. Concept bottleneck models promote …

Toward transparent AI: A survey on interpreting the inner structures of deep neural networks

T Räuker, A Ho, S Casper… - 2023 IEEE Conference …, 2023 - ieeexplore.ieee.org
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …

Interpretable machine learning: Fundamental principles and 10 grand challenges

C Rudin, C Chen, Z Chen, H Huang… - Statistic …, 2022 - projecteuclid.org
Interpretability in machine learning (ML) is crucial for high stakes decisions and
troubleshooting. In this work, we provide fundamental principles for interpretable ML, and …

From attribution maps to human-understandable explanations through concept relevance propagation

R Achtibat, M Dreyer, I Eisenbraun, S Bosse… - Nature Machine …, 2023 - nature.com
The field of explainable artificial intelligence (XAI) aims to bring transparency to today's
powerful but opaque deep learning models. While local XAI methods explain individual …

" help me help the ai": Understanding how explainability can support human-ai interaction

SSY Kim, EA Watkins, O Russakovsky, R Fong… - Proceedings of the …, 2023 - dl.acm.org
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …