The role of human knowledge in explainable AI

A Tocchetti, M Brambilla - Data, 2022 - mdpi.com
As the performance and complexity of machine learning models have grown significantly
in recent years, there has been an increasing need to develop methodologies to …

AI robustness: a human-centered perspective on technological challenges and opportunities

A Tocchetti, L Corti, A Balayn, M Yurrita… - ACM Computing …, 2022 - dl.acm.org
Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness
remains elusive and constitutes a key issue that impedes large-scale adoption. Besides …

DOSA: A Dataset of Social Artifacts from Different Indian Geographical Subcultures

A Seth, S Ahuja, K Bali, S Sitaram - arXiv preprint arXiv:2403.14651, 2024 - arxiv.org
Generative models are increasingly being used in various applications, such as text
generation, commonsense reasoning, and question-answering. To be effective globally …

Akal Badi ya Bias: An Exploratory Study of Gender Bias in Hindi Language Technology

R Hada, S Husain, V Gumma, H Diddee… - Proceedings of the …, 2024 - dl.acm.org
Existing research in measuring and mitigating gender bias predominantly centers on
English, overlooking the intricate challenges posed by non-English languages and the …

Opening the Analogical Portal to Explainability: Can Analogies Help Laypeople in AI-assisted Decision Making?

G He, A Balayn, S Buijsman, J Yang… - Journal of Artificial …, 2024 - jair.org
Concepts are an important construct in semantics, based on which humans
understand the world with various levels of abstraction. With the recent advances in …

It is like finding a polar bear in the savannah! Concept-level AI explanations with analogical inference from commonsense knowledge

G He, A Balayn, S Buijsman, J Yang… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
With recent advances in explainable artificial intelligence (XAI), researchers have started to
pay attention to concept-level explanations, which explain model predictions with a high …

Eye into AI: Evaluating the Interpretability of Explainable AI Techniques through a Game with a Purpose

K Morrison, M Jain, J Hammer, A Perer - Proceedings of the ACM on …, 2023 - dl.acm.org
Recent developments in explainable AI (XAI) aim to improve the transparency of black-box
models. However, empirically evaluating the interpretability of these XAI techniques is still …

The state of pilot study reporting in crowdsourcing: A reflection on best practices and guidelines

J Oppenlaender, T Abbas, U Gadiraju - … of the ACM on Human-Computer …, 2024 - dl.acm.org
Pilot studies are an essential cornerstone of the design of crowdsourcing campaigns, yet
they are often only mentioned in passing in the scholarly literature. A lack of details …

Nothing Comes Without Its World–Practical Challenges of Aligning LLMs to Situated Human Values through RLHF

A Arzberger, S Buijsman, ML Lupetti… - Proceedings of the …, 2024 - ojs.aaai.org
Work on value alignment aims to ensure that human values are respected by AI systems.
However, existing approaches tend to rely on universal framings of human values that …

Factory Operators' Perspectives on Cognitive Assistants for Knowledge Sharing: Challenges, Risks, and Impact on Work

SK Freire, T He, C Wang, E Niforatos… - arXiv preprint arXiv …, 2024 - arxiv.org
In the shift towards human-centered manufacturing, our two-year longitudinal study
investigates the real-world impact of deploying Cognitive Assistants (CAs) in factories. The …