Scott Lundberg
Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
A unified approach to interpreting model predictions
SM Lundberg, SI Lee
Advances in neural information processing systems 30, 2017
Cited by 31538 · 2017
From local explanations to global understanding with explainable AI for trees
SM Lundberg, G Erion, H Chen, A DeGrave, JM Prutkin, B Nair, R Katz, ...
Nature machine intelligence 2 (1), 56-67, 2020
Cited by 5607 · 2020
Sparks of artificial general intelligence: Early experiments with gpt-4
S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ...
arXiv preprint arXiv:2303.12712, 2023
Cited by 3795 · 2023
Consistent individualized feature attribution for tree ensembles
SM Lundberg, GG Erion, SI Lee
arXiv preprint arXiv:1802.03888, 2018
Cited by 2314 · 2018
Explainable machine-learning predictions for the prevention of hypoxaemia during surgery
SM Lundberg, B Nair, MS Vavilala, M Horibe, MJ Eisses, T Adams, ...
Nature biomedical engineering 2 (10), 749-760, 2018
Cited by 1654 · 2018
Understanding global feature contributions with additive importance measures
I Covert, SM Lundberg, SI Lee
Advances in Neural Information Processing Systems 33, 17212-17223, 2020
Cited by 428 · 2020
Explainable AI for trees: From local explanations to global understanding
SM Lundberg, G Erion, H Chen, A DeGrave, JM Prutkin, B Nair, R Katz, ...
arXiv preprint arXiv:1905.04610, 2019
Cited by 415 · 2019
A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia
SI Lee, S Celik, BA Logsdon, SM Lundberg, TJ Martins, VG Oehler, ...
Nature communications 9 (1), 42, 2018
Cited by 376 · 2018
Explaining by removing: A unified framework for model explanation
I Covert, S Lundberg, SI Lee
Journal of Machine Learning Research 22 (209), 1-90, 2021
Cited by 300 · 2021
Improving performance of deep learning models with axiomatic attribution priors and expected gradients
G Erion, JD Janizek, P Sturmfels, SM Lundberg, SI Lee
Nature machine intelligence 3 (7), 620-631, 2021
Cited by 265 · 2021
Visualizing the impact of feature attribution baselines
P Sturmfels, S Lundberg, SI Lee
Distill 5 (1), e22, 2020
Cited by 249 · 2020
Algorithms to estimate Shapley value feature attributions
H Chen, IC Covert, SM Lundberg, SI Lee
Nature Machine Intelligence 5 (6), 590-601, 2023
Cited by 222 · 2023
Art: Automatic multi-step reasoning and tool-use for large language models
B Paranjape, S Lundberg, S Singh, H Hajishirzi, L Zettlemoyer, ...
arXiv preprint arXiv:2303.09014, 2023
Cited by 198 · 2023
Consistent feature attribution for tree ensembles
SM Lundberg, SI Lee
arXiv preprint arXiv:1706.06060, 2017
Cited by 195 · 2017
True to the model or true to the data?
H Chen, JD Janizek, S Lundberg, SI Lee
arXiv preprint arXiv:2006.16234, 2020
Cited by 192 · 2020
From local explanations to global understanding with explainable AI for trees, Nat. Mach. Intell., 2, 56–67
SM Lundberg, G Erion, H Chen, A DeGrave, JM Prutkin, B Nair, R Katz, ...
Cited by 182 · 2020
An unexpected unity among methods for interpreting model predictions
S Lundberg, SI Lee
arXiv preprint arXiv:1611.07478, 2016
Cited by 166 · 2016
Explaining models by propagating Shapley values of local components
H Chen, S Lundberg, SI Lee
Explainable AI in Healthcare and Medicine: Building a Culture of …, 2021
Cited by 146 · 2021
Explaining a series of models by propagating Shapley values
H Chen, SM Lundberg, SI Lee
Nature communications 13 (1), 4512, 2022
Cited by 133 · 2022
Shapley flow: A graph-based approach to interpreting model predictions
J Wang, J Wiens, S Lundberg
International Conference on Artificial Intelligence and Statistics, 721-729, 2021
Cited by 123 · 2021
Articles 1–20