External multi-modal imaging sensor calibration for sensor fusion: A review

Z Qiu, J Martínez-Sánchez, P Arias-Sánchez… - Information Fusion, 2023 - Elsevier
Multi-modal data fusion has gained popularity due to its diverse applications, leading to an
increased demand for external sensor calibration. Despite several proven calibration …

M2DGR: A multi-sensor and multi-scenario SLAM dataset for ground robots

J Yin, A Li, T Li, W Yu, D Zou - IEEE Robotics and Automation …, 2021 - ieeexplore.ieee.org
We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full
sensor-suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera …

Multi-modal sensor fusion for auto driving perception: A survey

K Huang, B Shi, X Li, X Li, S Huang, Y Li - arxiv preprint arxiv …, 2022 - arxiv.org

Towards collaborative simultaneous localization and mapping: a survey of the current research landscape

PY Lajoie, B Ramtoula, F Wu, G Beltrame - arxiv preprint arxiv …, 2021 - arxiv.org
Motivated by the tremendous progress we witnessed in recent years, this paper presents a
survey of the scientific literature on the topic of Collaborative Simultaneous Localization and …

Neural lighting simulation for urban scenes

A Pun, G Sun, J Wang, Y Chen… - Advances in …, 2023 - proceedings.neurips.cc
Different outdoor illumination conditions drastically alter the appearance of urban scenes,
and they can harm the performance of image-based robot perception systems if not seen …

Deep multi-task learning for joint localization, perception, and prediction

J Phillips, J Martinez, IA Bârsan… - Proceedings of the …, 2021 - openaccess.thecvf.com
Over the last few years, we have witnessed tremendous progress on many subtasks of
autonomous driving including perception, motion forecasting, and motion planning …

Cross-view matching for vehicle localization by learning geographically local representations

Z Xia, O Booij, M Manfredi… - IEEE Robotics and …, 2021 - ieeexplore.ieee.org
Cross-view matching aims to learn a shared image representation between ground-level
images and satellite or aerial images at the same locations. In robotic vehicles, matching a …

3D LiDAR and monocular camera calibration: A review

H Zhang, S Li, X Zhu, H Chen, W Yao - IEEE Sensors Journal, 2025 - ieeexplore.ieee.org
Cameras and LiDAR sensors have been extensively utilized in autonomous systems to
enhance perception accuracy and robustness, owing to the highly complementary nature of …

JIST: Joint image and sequence training for sequential visual place recognition

G Berton, G Trivigno, B Caputo… - IEEE Robotics and …, 2023 - ieeexplore.ieee.org
Visual Place Recognition aims at recognizing previously visited places by relying on visual
clues, and it is used in robotics applications for SLAM and localization. Since typically a …

Towards robust and accurate cooperative state estimation for multiple rigid bodies

Y Wang, P Sun, Z Wang - IEEE Transactions on Vehicular …, 2023 - ieeexplore.ieee.org
Estimating the state (i.e., determining the position and orientation) of a rigid body (RB) is a
key enabler technology for many applications (e.g., vehicle motion control and autonomous …