A review of vision-based traffic semantic understanding in ITSs

J Chen, Q Wang, HH Cheng, W Peng… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
A semantic understanding of road traffic can help people grasp traffic flow
situations and emergencies more accurately and provide a sounder basis for anomaly …

Vehicle-to-everything (V2X) in the autonomous vehicles domain – A technical review of communication, sensor, and AI technologies for road user safety

SA Yusuf, A Khan, R Souissi - Transportation Research Interdisciplinary …, 2024 - Elsevier
Autonomous vehicles (AV) are rapidly becoming integrated into everyday life, with several
countries anticipating their inclusion in public transport networks in the coming years. Safety …

CRN: Camera Radar Net for accurate, robust, efficient 3D perception

Y Kim, J Shin, S Kim, IJ Lee… - Proceedings of the …, 2023 - openaccess.thecvf.com
Autonomous driving requires an accurate and fast 3D perception system that includes 3D
object detection, tracking, and segmentation. Although recent low-cost camera-based …

Radar-camera fusion for object detection and semantic segmentation in autonomous driving: A comprehensive review

S Yao, R Guan, X Huang, Z Li, X Sha… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Driven by deep learning techniques, perception technology in autonomous driving has
developed rapidly in recent years, enabling vehicles to accurately detect and interpret …

Directional connectivity-based segmentation of medical images

Z Yang, S Farsiu - Proceedings of the IEEE/CVF conference …, 2023 - openaccess.thecvf.com
Anatomical consistency in biomarker segmentation is crucial for many medical image
analysis tasks. A promising paradigm for achieving anatomically consistent segmentation …

Towards deep radar perception for autonomous driving: Datasets, methods, and challenges

Y Zhou, L Liu, H Zhao, M López-Benítez, L Yu, Y Yue - Sensors, 2022 - mdpi.com
With recent developments, the performance of automotive radar has improved significantly.
The next generation of 4D radar can achieve imaging capability in the form of high …

Deep depth estimation from thermal image

U Shin, J Park, IS Kweon - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Robust and accurate geometric understanding under adverse weather conditions is one
of the top priorities for achieving high-level autonomy in self-driving cars. However …

CRAFT: Camera-radar 3D object detection with spatio-contextual fusion transformer

Y Kim, S Kim, JW Choi, D Kum - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Camera and radar sensors have significant advantages in cost, reliability, and maintenance
compared to LiDAR. Existing fusion methods often fuse the outputs of single modalities at …

Depth estimation from camera image and mmWave radar point cloud

AD Singh, Y Ba, A Sarker, H Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a method for inferring dense depth from a camera image and a sparse noisy
radar point cloud. We first describe the mechanics behind mmWave radar point cloud …

Multi-modal fusion sensing: A comprehensive review of millimeter-wave radar and its integration with other modalities

S Wang, L Mei, R Liu, W Jiang, Z Yin… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Millimeter-wave (mmWave) radar, with its high resolution, sensitivity to micro-vibrations, and
adaptability to various environmental conditions, holds immense potential across multi …