Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions

S Atakishiyev, M Salameh, H Yao, R Goebel - IEEE Access, 2024 - ieeexplore.ieee.org
Autonomous driving has achieved significant milestones in research and development over
the last two decades. There is increasing interest in the field as the deployment of …

Developing future human-centered smart cities: Critical analysis of smart city security, Data management, and Ethical challenges

K Ahmad, M Maabreh, M Ghaly, K Khan, J Qadir… - Computer Science …, 2022 - Elsevier
As the globally increasing population drives rapid urbanization in various parts of the world,
there is a great need to deliberate on the future of cities worth living in. In particular, as …

End-to-end autonomous driving: Challenges and frontiers

L Chen, P Wu, K Chitta, B Jaeger… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The autonomous driving community has witnessed a rapid growth in approaches that
embrace an end-to-end algorithm framework, utilizing raw sensor input to generate vehicle …

DriveGPT4: Interpretable end-to-end autonomous driving via large language model

Z Xu, Y Zhang, E Xie, Z Zhao, Y Guo… - IEEE Robotics and …, 2024 - ieeexplore.ieee.org
Multimodal large language models (MLLMs) have emerged as a prominent area of interest
within the research community, given their proficiency in handling and reasoning with non …

DriveLM: Driving with graph visual question answering

C Sima, K Renz, K Chitta, L Chen, H Zhang… - … on Computer Vision, 2024 - Springer
We study how vision-language models (VLMs) trained on web-scale data can be integrated
into end-to-end driving systems to boost generalization and enable interactivity with human …

Driving with llms: Fusing object-level vector modality for explainable autonomous driving

L Chen, O Sinavski, J Hünermann… - … on Robotics and …, 2024 - ieeexplore.ieee.org
Large Language Models (LLMs) have shown promise in the autonomous driving sector,
particularly in generalization and interpretability. We introduce a unique object-level …

DriveVLM: The convergence of autonomous driving and large vision-language models

X Tian, J Gu, B Li, Y Liu, Y Wang, Z Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
A primary hurdle of autonomous driving in urban environments is understanding complex
and long-tail scenarios, such as challenging road conditions and delicate human behaviors …

Language in a bottle: Language model guided concept bottlenecks for interpretable image classification

Y Yang, A Panagopoulou, S Zhou… - Proceedings of the …, 2023 - openaccess.thecvf.com
Concept Bottleneck Models (CBM) are inherently interpretable models that factor
model decisions into human-readable concepts. They allow people to easily understand …

A survey on multimodal large language models for autonomous driving

C Cui, Y Ma, X Cao, W Ye, Y Zhou… - Proceedings of the …, 2024 - openaccess.thecvf.com
With the emergence of Large Language Models (LLMs) and Vision Foundation Models
(VFMs), multimodal AI systems benefiting from large models have the potential to equally …

Dolphins: Multimodal language model for driving

Y Ma, Y Cao, J Sun, M Pavone, C Xiao - European Conference on …, 2024 - Springer
The quest for fully autonomous vehicles (AVs) capable of navigating complex real-world
scenarios with human-like understanding and responsiveness continues. In this paper, we introduce …