Towards human-centered explainable AI: A survey of user studies for model explanations

Y Rong, T Leemann, TT Nguyen… - IEEE transactions on …, 2023 - ieeexplore.ieee.org
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …

MultiViz: Towards visualizing and understanding multimodal models

PP Liang, Y Lyu, G Chhablani, N Jain, Z Deng… - arXiv preprint arXiv …, 2022 - arxiv.org
The promise of multimodal models for real-world applications has inspired research in
visualizing and understanding their internal mechanics with the end goal of empowering …

Human interpretation of saliency-based explanation over text

H Schuff, A Jacovi, H Adel, Y Goldberg… - Proceedings of the 2022 …, 2022 - dl.acm.org
While a lot of research in explainable AI focuses on producing effective explanations, less
work is devoted to the question of how people understand and interpret the explanation. In …

Recent Developments on Accountability and Explainability for Complex Reasoning Tasks

P Atanasova - Accountable and Explainable Methods for Complex …, 2024 - Springer
This chapter delves into the recent accountability tools tailored for the evolving landscape of
machine learning models for complex reasoning tasks. With the increasing integration of …

Learning to scaffold: Optimizing model explanations for teaching

P Fernandes, M Treviso, D Pruthi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Modern machine learning models are opaque, and as a result there is a burgeoning
academic subfield on methods that explain these models' behavior. However, what is the …

Saliency map verbalization: Comparing feature importance representations from model-free and instruction-based methods

N Feldhus, L Hennig, MD Nasert, C Ebert… - arXiv preprint arXiv …, 2022 - arxiv.org
Saliency maps can explain a neural model's predictions by identifying important input
features. They are difficult to interpret for laypeople, especially for instances with many …
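
The snippet above describes a two-step pipeline: score input features with a saliency map, then verbalize the scores so laypeople can read them. Below is a minimal illustrative sketch of that idea, not the paper's implementation: token saliency is taken as the gradient norm of a class logit with respect to the token embeddings, and the verbalization is a simple model-free, top-k template. All names (`ToyClassifier`, `gradient_saliency`, `verbalize`) are hypothetical stand-ins for a real text classifier and tokenizer.

```python
import torch
import torch.nn as nn

# Toy stand-in for a neural text classifier: embed -> mean-pool -> linear.
# Any model mapping token embeddings to class logits works the same way.
class ToyClassifier(nn.Module):
    def __init__(self, vocab_size=100, dim=16, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, embeddings):                 # embeddings: (1, seq_len, dim)
        return self.head(embeddings.mean(dim=1))   # logits: (1, num_classes)

def gradient_saliency(model, embeddings, target_class):
    """Score each token by the L2 norm of d(logit)/d(embedding)."""
    embeddings = embeddings.clone().detach().requires_grad_(True)
    logits = model(embeddings)
    logits[0, target_class].backward()
    return embeddings.grad.norm(dim=-1).squeeze(0)  # (seq_len,)

def verbalize(tokens, scores, k=3):
    """Model-free verbalization: name the k most salient tokens in plain text."""
    top = sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])[:k]
    return "The prediction relied mainly on " + ", ".join(
        f"'{t}'" for t, _ in top) + "."

tokens = ["the", "movie", "was", "wonderful"]
model = ToyClassifier()
ids = torch.randint(0, 100, (1, len(tokens)))       # stand-in token ids
saliency = gradient_saliency(model, model.embed(ids), target_class=1)
print(verbalize(tokens, saliency))
```

Gradient-norm saliency is only one of many attribution methods, and the template above corresponds to the "model-free" verbalization baseline the title alludes to, as opposed to instruction-based (LLM-generated) descriptions.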

Mediators: Conversational agents explaining NLP model behavior

N Feldhus, AM Ravichandran, S Möller - arXiv preprint arXiv:2206.06029, 2022 - arxiv.org
The human-centric explainable artificial intelligence (HCXAI) community has raised the
need for framing the explanation process as a conversation between human and machine …

Silent vulnerable dependency alert prediction with vulnerability key aspect explanation

J Sun, Z Xing, Q Lu, X Xu, L Zhu… - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
Due to its convenience, open-source software is widely used. For beneficial reasons, open-
source maintainers often fix vulnerabilities silently, leaving their users unaware of the …

Explaining speech classification models via word-level audio segments and paralinguistic features

E Pastor, A Koudounas, G Attanasio, D Hovy… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent advances in eXplainable AI (XAI) have provided new insights into how models for
vision, language, and tabular data operate. However, few approaches exist for …

Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary

Z Li, M Yin - Advances in Neural Information Processing …, 2025 - proceedings.neurips.cc
Recent advances in AI models have increased the integration of AI-based decision aids into
the human decision-making process. To fully unlock the potential of AI-assisted decision …