Human uncertainty in concept-based AI systems

KM Collins, M Barker, M Espinosa Zarlenga… - Proceedings of the …, 2023 - dl.acm.org
Placing a human in the loop may help abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …

Learning to intervene on concept bottlenecks

D Steinmann, W Stammer, F Friedrich… - arXiv preprint arXiv …, 2023 - arxiv.org
While traditional deep learning models often lack interpretability, concept bottleneck models
(CBMs) provide inherent explanations via their concept representations. Specifically, they …
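
The two entries above both concern human intervention on concept bottleneck models. As background, a minimal sketch of the generic CBM pattern (input -> predicted concepts -> label) with a test-time concept intervention could look like the following; the layer sizes and the interventions argument are illustrative assumptions, not the specific method of either paper.

    import torch
    import torch.nn as nn

    class ConceptBottleneckModel(nn.Module):
        """Minimal CBM sketch: x -> concept activations c_hat -> label logits."""

        def __init__(self, n_features: int, n_concepts: int, n_classes: int):
            super().__init__()
            self.concept_net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_concepts)
            )
            self.label_net = nn.Linear(n_concepts, n_classes)

        def forward(self, x, interventions=None):
            c_hat = torch.sigmoid(self.concept_net(x))  # predicted concept activations
            if interventions:                           # {concept index: ground-truth value}
                c_hat = c_hat.clone()
                for idx, value in interventions.items():
                    c_hat[:, idx] = value               # human overrides a concept
            return c_hat, self.label_net(c_hat)         # label recomputed from concepts

    model = ConceptBottleneckModel(n_features=32, n_concepts=10, n_classes=5)
    x = torch.randn(1, 32)
    _, y_before = model(x)
    _, y_after = model(x, interventions={3: 1.0})       # expert asserts concept 3 is present

Because the label head consumes only the (possibly corrected) concept vector, a single intervention propagates directly to the final prediction, which is the property the intervention-learning work above builds on.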

On the fusion of soft-decision-trees and concept-based models

DM Rodríguez, MP Cuéllar, DP Morales - Applied Soft Computing, 2024 - Elsevier
In the field of eXplainable Artificial Intelligence (XAI), the generation of interpretable models
that are able to match the performance of state-of-the-art deep learning methods is one of …

Concept logic trees: enabling user interaction for transparent image classification and human-in-the-loop learning

DM Rodríguez, MP Cuéllar, DP Morales - Applied Intelligence, 2024 - Springer
Interpretable deep learning models are increasingly important in domains where transparent
decision-making is required. In this field, the interaction of the user with the model can …

Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency

M Dikter, T Blau, C Baskin - arXiv preprint arXiv:2406.08840, 2024 - arxiv.org
Concept bottleneck models (CBMs) have emerged as critical tools in domains where
interpretability is paramount. These models rely on predefined textual descriptions, referred …
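
The snippet above notes that such CBMs rely on predefined textual concept descriptions. One common realization of that idea (a generic sketch, not necessarily this paper's method) is to score an image embedding against text embeddings of the concept descriptions by cosine similarity; embed_text below is a deterministic stand-in for a real vision-language text encoder such as CLIP.

    import hashlib
    import numpy as np

    concepts = ["has wings", "has a beak", "has fur"]  # predefined textual descriptions

    def embed_text(s: str) -> np.ndarray:
        # Stand-in for a real text encoder: a deterministic toy pseudo-embedding.
        seed = int(hashlib.md5(s.encode()).hexdigest(), 16) % (2**32)
        v = np.random.default_rng(seed).normal(size=128)
        return v / np.linalg.norm(v)

    def concept_scores(image_emb: np.ndarray, concept_embs: np.ndarray) -> np.ndarray:
        # Cosine similarity between the image and each concept description.
        image_emb = image_emb / np.linalg.norm(image_emb)
        return concept_embs @ image_emb

    concept_embs = np.stack([embed_text(c) for c in concepts])
    image_emb = np.random.default_rng(0).normal(size=128)  # stand-in image embedding
    scores = concept_scores(image_emb, concept_embs)       # one interpretable score per concept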
