From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI

M Nauta, J Trienes, S Pathak, E Nguyen… - ACM Computing …, 2023 - dl.acm.org
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …

Algorithms to estimate Shapley value feature attributions

H Chen, IC Covert, SM Lundberg, SI Lee - Nature Machine Intelligence, 2023 - nature.com
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
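
The estimators surveyed here all approximate the same quantity; one widely used baseline is Monte Carlo permutation sampling, which averages each feature's marginal contribution over random feature orderings. The sketch below is a generic illustration of that idea, not a specific algorithm from the paper; value_fn is a hypothetical callable that scores a feature coalition.

    import numpy as np

    def shapley_attributions(value_fn, n_features, n_permutations=200, seed=None):
        """Estimate Shapley value feature attributions by permutation sampling.

        value_fn(coalition) -> float is assumed to return the model's value
        (e.g. its expected prediction) when only the features in `coalition`
        are known.
        """
        rng = np.random.default_rng(seed)
        phi = np.zeros(n_features)
        for _ in range(n_permutations):
            order = rng.permutation(n_features)
            coalition = []
            prev = value_fn(frozenset(coalition))    # value of the empty coalition
            for i in order:
                coalition.append(int(i))
                curr = value_fn(frozenset(coalition))
                phi[i] += curr - prev                # marginal contribution of feature i
                prev = curr
        return phi / n_permutations                  # average over sampled orderings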

Foundation models and fair use

P Henderson, X Li, D Jurafsky, T Hashimoto… - Journal of Machine …, 2023 - jmlr.org
Existing foundation models are trained on copyrighted material. Deploying these models
can pose both legal and ethical risks when data creators fail to receive appropriate …

Interpretable machine learning–a brief history, state-of-the-art and challenges

C Molnar, G Casalicchio, B Bischl - Joint European conference on …, 2020 - Springer
We present a brief history of the field of interpretable machine learning (IML), give an
overview of state-of-the-art interpretation methods and discuss challenges. Research in IML …

Opportunities and challenges in explainable artificial intelligence (XAI): A survey

A Das, P Rad - arXiv preprint arXiv:2006.11371, 2020 - arxiv.org
Nowadays, deep neural networks are widely used in mission-critical systems such as
healthcare, self-driving vehicles, and the military, which have a direct impact on human lives …

The Shapley value in machine learning

B Rozemberczki, L Watson, P Bayer, HT Yang… - arXiv preprint arXiv …, 2022 - arxiv.org
Over the last few years, the Shapley value, a solution concept from cooperative game theory,
has found numerous applications in machine learning. In this paper, we first discuss …

From local explanations to global understanding with explainable AI for trees

SM Lundberg, G Erion, H Chen, A DeGrave… - Nature machine …, 2020 - nature.com
Tree-based machine learning models such as random forests, decision trees and gradient
boosted trees are popular nonlinear predictive models, yet comparatively little attention has …
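
The TreeSHAP method introduced in this paper is implemented in the open-source shap package. A minimal usage sketch follows, assuming a scikit-learn random forest and the California housing data as placeholder model and dataset; any tree ensemble supported by the package would work the same way.

    import shap
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder model and data; TreeExplainer supports most common tree ensembles.
    X, y = fetch_california_housing(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)               # tree-specific SHAP explainer
    shap_values = explainer.shap_values(X.iloc[:200])   # local per-feature attributions
    shap.summary_plot(shap_values, X.iloc[:200])        # aggregate local values into a global view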

Explainability in deep reinforcement learning

A Heuillet, F Couthouis, N Díaz-Rodríguez - Knowledge-Based Systems, 2021 - Elsevier
A large body of explainable Artificial Intelligence (XAI) literature is emerging on feature
relevance techniques to explain a deep neural network (DNN) output or to explain models …

Can HR adapt to the paradoxes of artificial intelligence?

A Charlwood, N Guenole - Human Resource Management …, 2022 - Wiley Online Library
Artificial intelligence (AI) is widely heralded as a new and revolutionary technology that will
transform the world of work. While the impact of AI on human resource (HR) and people …

[BOOK][B] Interpretable machine learning

C Molnar - 2020 - books.google.com
This book is about making machine learning models and their decisions interpretable. After
exploring the concepts of interpretability, you will learn about simple, interpretable models …