Reframing human-AI collaboration for generating free-text explanations

S Wiegreffe, J Hessel, S Swayamdipta, M Riedl… - arXiv preprint arXiv …, 2021 - arxiv.org
Large language models are increasingly capable of generating fluent-appearing text with
relatively little task-specific supervision. But can these models accurately explain …

Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems

Z Buçinca, P Lin, KZ Gajos, EL Glassman - Proceedings of the 25th …, 2020 - dl.acm.org
Explainable artificially intelligent (XAI) systems form part of sociotechnical systems, e.g.,
human+AI teams tasked with making decisions. Yet, current XAI systems are rarely …

IEEE P7001: A proposed standard on transparency

AFT Winfield, S Booth, LA Dennis, T Egawa… - Frontiers in Robotics …, 2021 - frontiersin.org
This paper describes IEEE P7001, a new draft standard on transparency of autonomous
systems. In the paper, we outline the development and structure of the draft standard. We …

Advancing explainable autonomous vehicle systems: A comprehensive review and research roadmap

S Tekkesinoglu, A Habibovic, L Kunze - ACM Transactions on Human …, 2024 - dl.acm.org
Given the uncertainty surrounding how existing explainability methods for autonomous
vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is …

A survey of explainable AI terminology

MA Clinciu, HF Hastie - 1st Workshop on Interactive Natural …, 2019 - research.ed.ac.uk
The field of Explainable Artificial Intelligence attempts to solve the problem of
algorithmic opacity. Many terms and notions have been introduced recently to define …

A study of automatic metrics for the evaluation of natural language explanations

M Clinciu, A Eshghi, H Hastie - arXiv preprint arXiv:2103.08545, 2021 - arxiv.org
As transparency becomes key for robotics and AI, it will be necessary to evaluate the
methods through which transparency is provided, including automatically generated natural …

Transparency in HRI: Trust and decision making in the face of robot errors

B Nesset, DA Robb, J Lopes, H Hastie - Companion of the 2021 ACM …, 2021 - dl.acm.org
Robots are rapidly gaining acceptance, as the general public, industry,
and researchers begin to understand their utility, for example for delivery to …

Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable

A Simkute, E Luger, B Jones, M Evans… - Journal of Responsible …, 2021 - Elsevier
Algorithmic decision support systems are widely applied in domains ranging from healthcare
to journalism. To ensure that these systems are fair and accountable, it is essential that …

ChatGPT rates natural language explanation quality like humans: But on which scales?

F Huang, H Kwak, K Park, J An - arXiv preprint arXiv:2403.17368, 2024 - arxiv.org
As AI becomes more integral to our lives, the need for transparency and responsibility
grows. While natural language explanations (NLEs) are vital for clarifying the reasoning …

Explaining tree model decisions in natural language for network intrusion detection

N Ziems, G Liu, J Flanagan, M Jiang - arXiv preprint arXiv:2310.19658, 2023 - arxiv.org
Network intrusion detection (NID) systems that leverage machine learning have been
shown to perform strongly in practice when used to detect malicious network traffic …