Human-in-the-loop reinforcement learning: A survey and position on requirements, challenges, and opportunities

CO Retzlaff, S Das, C Wayllace, P Mousavi… - Journal of Artificial …, 2024 - jair.org
Artificial intelligence (AI) and especially reinforcement learning (RL) have the potential to
enable agents to learn and perform tasks autonomously with superhuman performance …

From industry 5.0 to forestry 5.0: Bridging the gap with human-centered artificial intelligence

A Holzinger, J Schweier, C Gollob, A Nothdurft… - Current Forestry …, 2024 - Springer
Purpose of the Review: Recent technological innovations in Artificial Intelligence
(AI) have successfully revolutionized many industrial processes, enhancing productivity and …

Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions

L Longo, M Brcic, F Cabitza, J Choi, R Confalonieri… - Information …, 2024 - Elsevier
Understanding black box models has become paramount as systems based on opaque
Artificial Intelligence (AI) continue to flourish in diverse real-world applications. In response …

Impact of generative artificial intelligence models on the performance of citizen data scientists in retail firms

RA Abumalloh, M Nilashi, KB Ooi, GWH Tan… - Computers in …, 2024 - Elsevier
Generative Artificial Intelligence (AI) models serve as powerful tools for
organizations aiming to integrate advanced data analysis and automation into their …

Explainability pitfalls: Beyond dark patterns in explainable AI

U Ehsan, MO Riedl - Patterns, 2024 - cell.com
To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful
effects is important. In this paper, we address an important yet unarticulated type of negative …

Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning

E Şahin, NN Arslan, D Özdemir - Neural Computing and Applications, 2024 - Springer
Deep learning models have revolutionized numerous fields, yet their decision-making
processes often remain opaque, earning them the characterization of “black-box” models …

CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks

JM Metsch, A Saranti, A Angerschmid, B Pfeifer… - Journal of Biomedical …, 2024 - Elsevier
Background: Lack of trust in artificial intelligence (AI) models in medicine is still the key
blockage for the use of AI in clinical decision support systems (CDSS). Although AI models …

Trustworthy multi-phase liver tumor segmentation via evidence-based uncertainty

C Hu, T **a, Y Cui, Q Zou, Y Wang, W **ao, S Ju… - … Applications of Artificial …, 2024 - Elsevier
Multi-phase liver contrast-enhanced computed tomography (CECT) images convey the
complementary multi-phase information for liver tumor segmentation (LiTS), which are …

Transformer models in biomedicine

S Madan, M Lentzen, J Brandt, D Rueckert… - BMC Medical Informatics …, 2024 - Springer
Deep neural networks (DNN) have fundamentally revolutionized the artificial intelligence
(AI) field. The transformer model is a type of DNN that was originally used for the natural …

Sensors for digital transformation in smart forestry

F Ehrlich-Sommer, F Hoenigsberger, C Gollob… - Sensors, 2024 - mdpi.com
Smart forestry, an innovative approach leveraging artificial intelligence (AI), aims to enhance
forest management while minimizing the environmental impact. The efficacy of AI in this …