Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions

S Atakishiyev, M Salameh, H Yao, R Goebel - IEEE Access, 2024 - ieeexplore.ieee.org
Autonomous driving has achieved significant milestones in research and development over
the last two decades. There is increasing interest in the field as the deployment of …

LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness

O Friha, MA Ferrag, B Kantarci… - IEEE Open Journal …, 2024 - ieeexplore.ieee.org
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …

End-to-end autonomous driving: Challenges and frontiers

L Chen, P Wu, K Chitta, B Jaeger… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The autonomous driving community has witnessed a rapid growth in approaches that
embrace an end-to-end algorithm framework, utilizing raw sensor input to generate vehicle …

DriveLM: Driving with graph visual question answering

C Sima, K Renz, K Chitta, L Chen, H Zhang… - … on Computer Vision, 2024 - Springer
We study how vision-language models (VLMs) trained on web-scale data can be integrated
into end-to-end driving systems to boost generalization and enable interactivity with human …

A survey on multimodal large language models for autonomous driving

C Cui, Y Ma, X Cao, W Ye, Y Zhou… - Proceedings of the …, 2024 - openaccess.thecvf.com
With the emergence of Large Language Models (LLMs) and Vision Foundation Models
(VFMs), multimodal AI systems benefiting from large models have the potential to equally …

LMDrive: Closed-loop end-to-end driving with large language models

H Shao, Y Hu, L Wang, G Song… - Proceedings of the …, 2024 - openaccess.thecvf.com
Despite significant recent progress in the field of autonomous driving, modern methods still
struggle and can incur serious accidents when encountering long-tail unforeseen events …

Dolphins: Multimodal language model for driving

Y Ma, Y Cao, J Sun, M Pavone, C Xiao - European Conference on …, 2024 - Springer
The quest continues for fully autonomous vehicles (AVs) capable of navigating complex real-world
scenarios with human-like understanding and responsiveness. In this paper, we introduce …

LingoQA: Visual question answering for autonomous driving

AM Marcu, L Chen, J Hünermann, A Karnsund… - … on Computer Vision, 2024 - Springer
We introduce LingoQA, a novel dataset and benchmark for visual question answering in
autonomous driving. The dataset contains 28K unique short video scenarios, and 419K …

Embodied understanding of driving scenarios

Y Zhou, L Huang, Q Bu, J Zeng, T Li, H Qiu… - … on Computer Vision, 2024 - Springer
Embodied scene understanding serves as the cornerstone for autonomous agents to
perceive, interpret, and respond to open driving scenarios. Such understanding is typically …

LaMPilot: An open benchmark dataset for autonomous driving with language model programs

Y Ma, C Cui, X Cao, W Ye, P Liu, J Lu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Autonomous driving (AD) has made significant strides in recent years. However, existing
frameworks struggle to interpret and execute spontaneous user instructions, such as "…