Multi-modal machine learning in engineering design: A review and future directions

B Song, R Zhou, F Ahmed - … of Computing and …, 2024 - asmedigitalcollection.asme.org
In the rapidly advancing field of multi-modal machine learning (MMML), the convergence of
multiple data modalities has the potential to reshape various applications. This paper …

[PDF][PDF] Better metrics for evaluating explainable artificial intelligence

A Rosenfeld - Proceedings of the 20th international conference …, 2021 - researchgate.net
This paper presents objective metrics for how explainable artificial intelligence (XAI) can be
quantified. Through an overview of current trends, we show that many explanations are …

A design space for human sensor and actuator focused in-vehicle interaction based on a systematic literature review

P Jansen, M Colley, E Rukzio - Proceedings of the ACM on Interactive …, 2022 - dl.acm.org
Automotive user interfaces constantly change due to increasing automation, novel features,
additional applications, and user demands. While in-vehicle interaction can utilize numerous …

Learning from active human involvement through proxy value propagation

ZM Peng, W Mo, C Duan, Q Li… - Advances in neural …, 2024 - proceedings.neurips.cc
Learning from active human involvement enables the human subject to actively intervene
and demonstrate to the AI agent during training. The interaction and corrective feedback …

What and When to Explain? On-road Evaluation of Explanations in Highly Automated Vehicles

G Kim, D Yeo, T Jo, D Rus, SJ Kim - … of the ACM on Interactive, Mobile …, 2023 - dl.acm.org
Explanations in automated vehicles help passengers understand the vehicle's state and
capabilities, leading to increased trust in the technology. Specifically, for passengers of SAE …

IC3M: In-car multimodal multi-object monitoring for abnormal status of both driver and passengers

Z Fang, Z Lin, S Hu, H Cao, Y Deng, X Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, in-car monitoring has emerged as a promising technology for detecting early-
stage abnormal status of the driver and providing timely alerts to prevent traffic accidents …

MuMu: Cooperative multitask learning-based guided multimodal fusion

MM Islam, T Iqbal - Proceedings of the AAAI conference on artificial …, 2022 - ojs.aaai.org
Multimodal sensors (visual, non-visual, and wearable) can provide complementary
information to develop robust perception systems for recognizing activities accurately …

AutoVis: Enabling mixed-immersive analysis of automotive user interface interaction studies

P Jansen, J Britten, A Häusele… - Proceedings of the …, 2023 - dl.acm.org
Automotive user interface (AUI) evaluation becomes increasingly complex due to novel
interaction modalities, driving automation, heterogeneous data, and dynamic environmental …

Takeover quality prediction based on driver physiological state of different cognitive tasks in conditionally automated driving

J Zhu, Y Ma, Y Zhang, Y Zhang, C Lv - Advanced Engineering Informatics, 2023 - Elsevier
In conditionally automated driving, traffic safety problems can occur if the driver does not
properly take over control authority when the automated system issues a takeover request …

An analysis of physiological responses as indicators of driver takeover readiness in conditionally automated driving

M Deng, A Gluck, Y Zhao, D Li, CC Menassa… - Accident Analysis & …, 2024 - Elsevier
By the year 2045, it is projected that Autonomous Vehicles (AVs) will make up half of the
new vehicle market. Successful adoption of AVs can reduce drivers' stress and fatigue, curb …