Human uncertainty in concept-based AI systems
Placing a human in the loop may help abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …
Learning to intervene on concept bottlenecks
While traditional deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Specifically, they …
On the fusion of soft-decision-trees and concept-based models
D Morales Rodríguez, MP Cuéllar, DP Morales - 2024 - digibug.ugr.es
In the field of eXplainable Artificial Intelligence (XAI), the generation of interpretable models that are able to match the performance of state-of-the-art deep learning methods is one of …
Concept logic trees: enabling user interaction for transparent image classification and human-in-the-loop learning
D Morales Rodríguez, M Pegalajar Cuéllar… - 2024 - digibug.ugr.es
Interpretable deep learning models are increasingly important in domains where transparent decision-making is required. In this field, the interaction of the user with the model can …
Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency
Concept bottleneck models (CBMs) have emerged as critical tools in domains where interpretability is paramount. These models rely on predefined textual descriptions, referred …