Towards human-centered explainable AI: A survey of user studies for model explanations

Y Rong, T Leemann, TT Nguyen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …

Mitigating bias in algorithmic systems—a fish-eye view

K Orphanou, J Otterbacher, S Kleanthous… - ACM Computing …, 2022 - dl.acm.org
Mitigating bias in algorithmic systems is a critical issue drawing attention across
communities within the information and computer sciences. Given the complexity of the …

Aligning eyes between humans and deep neural network through interactive attention alignment

Y Gao, TS Sun, L Zhao, SR Hong - Proceedings of the ACM on Human …, 2022 - dl.acm.org
While Deep Neural Networks (DNNs) are driving the major innovations through their
powerful automation, we are also witnessing the peril behind automation as a form of bias …

How can explainability methods be used to support bug identification in computer vision models?

A Balayn, N Rikalo, C Lofi, J Yang… - Proceedings of the 2022 …, 2022 - dl.acm.org
Deep learning models for image classification suffer from dangerous issues often
discovered after deployment. The process of identifying bugs that cause these issues …

Automatic identification of harmful, aggressive, abusive, and offensive language on the web: A survey of technical biases informed by psychology literature

A Balayn, J Yang, Z Szlavik, A Bozzon - ACM Transactions on Social …, 2021 - dl.acm.org
The automatic detection of conflictual languages (harmful, aggressive, abusive, and
offensive languages) is essential to provide a healthy conversation environment on the Web …

The role of human knowledge in explainable AI

A Tocchetti, M Brambilla - Data, 2022 - mdpi.com
As the performance and complexity of machine learning models have grown significantly
over recent years, there has been an increasing need to develop methodologies to …

AI robustness: a human-centered perspective on technological challenges and opportunities

A Tocchetti, L Corti, A Balayn, M Yurrita… - ACM Computing …, 2022 - dl.acm.org
Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness
remains elusive and constitutes a key issue that impedes large-scale adoption. Besides …

What should you know? A human-in-the-loop approach to unknown unknowns characterization in image recognition

S Sharifi Noorian, S Qiu, U Gadiraju, J Yang… - Proceedings of the …, 2022 - dl.acm.org
Unknown unknowns represent a major challenge in reliable image recognition. Existing
methods mainly focus on unknown unknowns identification, leveraging human intelligence …

"It Is a Moving Process": Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine

L Corti, R Oltmans, J Jung, A Balayn… - Proceedings of the CHI …, 2024 - dl.acm.org
Clinicians increasingly pay attention to Artificial Intelligence (AI) to improve the quality and
timeliness of their services. There are converging opinions on the need for Explainable AI …

Black-box error diagnosis in Deep Neural Networks for computer vision: a survey of tools

P Fraternali, F Milani, RN Torres… - Neural Computing and …, 2023 - Springer
The application of Deep Neural Networks (DNNs) to a broad variety of tasks
demands methods for coping with the complex and opaque nature of these architectures …