Pathfinder: Designing a map-less navigation system for blind people in unfamiliar buildings

M Kuribayashi, T Ishihara, D Sato… - Proceedings of the …, 2023 - dl.acm.org
Indoor navigation systems with prebuilt maps have shown great potential for guiding blind
people even in unfamiliar buildings. However, blind people cannot always benefit from them …

Vision-based road-following using results of semantic segmentation for autonomous navigation

R Miyamoto, Y Nakamura, M Adachi… - 2019 IEEE 9th …, 2019 - ieeexplore.ieee.org
Recent research into the autonomous navigation of robots has used accurate and dense three-
dimensional sensors such as 3D LiDAR and RADAR for map building and localization …

Visual navigation based on semantic segmentation using only a monocular camera as an external sensor

R Miyamoto, M Adachi, H Ishida… - Journal of Robotics …, 2020 - jstage.jst.go.jp
The most popular external sensor for robots capable of autonomous movement is 3D LiDAR.
However, cameras are typically installed on robots that operate in environments where …
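
As a rough sketch of the general idea behind camera-only navigation from segmentation output (not the exact controller described in this or the related papers), steering can be derived from where the traversable region sits in the frame; the class ID, image size, and gain below are illustrative assumptions:

    import numpy as np

    ROAD_CLASS = 1  # hypothetical label ID for the traversable class


    def steering_from_segmentation(label_map: np.ndarray, gain: float = 1.5) -> float:
        """Return a steering command in [-1, 1] from a per-pixel class map.

        The command is proportional to the horizontal offset of the
        traversable region's centroid from the image center, using only
        the lower half of the frame (closest to the robot).
        """
        h, w = label_map.shape
        lower = label_map[h // 2 :, :]                 # region nearest the robot
        _, xs = np.nonzero(lower == ROAD_CLASS)        # traversable pixel columns
        if xs.size == 0:
            return 0.0                                 # nothing traversable: hold/stop
        offset = (xs.mean() - w / 2) / (w / 2)         # normalized centroid offset
        return float(np.clip(gain * offset, -1.0, 1.0))


    if __name__ == "__main__":
        # Toy frame: road region shifted to the right of center.
        demo = np.zeros((120, 160), dtype=np.uint8)
        demo[60:, 90:140] = ROAD_CLASS
        print(steering_from_segmentation(demo))        # positive -> steer right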

Visual navigation using a webcam based on semantic segmentation for indoor robots

M Adachi, S Shatari, R Miyamoto - 2019 15th International …, 2019 - ieeexplore.ieee.org
The realization of a mobile robot that can work autonomously in a real environment has
become important. A dense three-dimensional map created using three …

Practical Implementation of Visual Navigation Based on Semantic Segmentation for Human-Centric Environments

M Adachi, K Honda, J Xue, H Sudo, Y Ueda… - Journal of Robotics …, 2023 - jstage.jst.go.jp
This study focuses on visual navigation methods for autonomous mobile robots based on
semantic segmentation results. The challenge is to perform the expected actions without …

Turning at intersections using virtual lidar signals obtained from a segmentation result

M Adachi, K Honda, R Miyamoto - Journal of Robotics and …, 2023 - jstage.jst.go.jp
We implemented a novel visual navigation method for autonomous mobile robots based on
the results of semantic segmentation. The novelty of this method lies in its control …
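
The "virtual lidar" idea can be illustrated as ray-casting over the segmentation mask: march outward from a point near the bottom-center of the image along each bearing until a non-traversable pixel is hit, yielding a distance-per-angle profile analogous to a planar lidar scan. A minimal sketch under assumed class labels and geometry, not the paper's implementation:

    import numpy as np

    ROAD_CLASS = 1  # hypothetical traversable-class ID


    def virtual_lidar(label_map: np.ndarray, n_rays: int = 61, max_range: int = 200):
        """Cast rays over a segmentation mask; return (angles, free distance per ray).

        Rays fan out from the bottom-center of the image between -90 and +90
        degrees (0 = straight ahead). Distances are in pixels, capped at max_range.
        """
        h, w = label_map.shape
        origin = np.array([h - 1, w // 2], dtype=float)       # (row, col)
        angles = np.linspace(-np.pi / 2, np.pi / 2, n_rays)
        distances = np.full(n_rays, float(max_range))
        for i, a in enumerate(angles):
            step = np.array([-np.cos(a), np.sin(a)])          # up and sideways
            for r in range(1, max_range):
                row, col = (origin + r * step).astype(int)
                if not (0 <= row < h and 0 <= col < w) or label_map[row, col] != ROAD_CLASS:
                    distances[i] = r
                    break
        return angles, distances


    if __name__ == "__main__":
        # Toy frame: corridor straight ahead plus a crossing corridor near the robot.
        demo = np.zeros((120, 160), dtype=np.uint8)
        demo[:, 70:90] = ROAD_CLASS
        demo[100:, :] = ROAD_CLASS
        angles, dists = virtual_lidar(demo)
        side = dists[np.abs(np.degrees(angles)) > 80]
        print(side.min())   # long free range to both sides signals the crossing corridor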

Accuracy improvement of semantic segmentation trained with data generated from a 3D model by histogram matching using suitable references

M Adachi, H Komatsuzaki, M Wada… - … on Systems, Man, and …, 2022 - ieeexplore.ieee.org
Visual navigation based on the results of semantic segmentation requires high classification
accuracy. Previous research has shown that a semantic segmentation classifier trained …
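
Histogram matching itself is a standard operation; below is a sketch of the general step (adapting a frame rendered from the 3D model to the color statistics of a real reference image before training) using scikit-image, with the file names and choice of reference purely illustrative:

    import numpy as np
    from skimage import io, exposure

    # Hypothetical inputs: a frame rendered from the 3D model and a real photo
    # of the target environment chosen as the color/illumination reference.
    synthetic = io.imread("render_0001.png")
    reference = io.imread("real_reference.png")

    # Match each channel's histogram of the synthetic image to the reference
    # (channel_axis requires scikit-image >= 0.19; older releases use multichannel=True).
    matched = exposure.match_histograms(synthetic, reference, channel_axis=-1)

    io.imsave("render_0001_matched.png", np.clip(matched, 0, 255).astype(np.uint8))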

Feasibility study of intersection detection and recognition using a single shot image for robot navigation

T Watanabe, K Matsutani, M Adachi, T Oki… - Journal of Image and …, 2021 - joig.net
This study attempts to realize the autonomous movement of a robot using a camera-based
navigation system instead of expensive external sensors such as light detection and …
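
One toy heuristic for spotting an intersection in a single segmented frame (not the detector proposed in the paper) is to count how many separate runs of traversable pixels touch the top, left, and right borders of the image; three or more suggest branching paths ahead. The class ID below is an assumption:

    import numpy as np

    ROAD_CLASS = 1  # hypothetical traversable-class ID


    def count_road_exits(label_map: np.ndarray) -> int:
        """Count separate runs of traversable pixels along the top/left/right borders.

        Walking the image border (excluding the bottom edge, where the robot is),
        each maximal run of road-labelled pixels is treated as one exit from the
        visible area; two exits suggest a plain corridor, three or more an intersection.
        """
        border = np.concatenate([
            label_map[::-1, 0],      # left edge, bottom to top
            label_map[0, 1:-1],      # top edge, left to right
            label_map[:, -1],        # right edge, top to bottom
        ]) == ROAD_CLASS
        # Number of runs = rising edges (False -> True) plus one if the walk starts on road.
        runs = np.count_nonzero(np.diff(border.astype(int)) == 1) + int(border[0])
        return int(runs)


    if __name__ == "__main__":
        demo = np.zeros((120, 160), dtype=np.uint8)
        demo[:, 70:90] = ROAD_CLASS    # corridor ahead touches the top edge
        demo[100:, :] = ROAD_CLASS     # crossing corridor touches left and right edges
        print(count_road_exits(demo))  # -> 3 exits: left, ahead, right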

Model-based estimation of road direction in urban scenes using virtual lidar signals

M Adachi, R Miyamoto - 2020 IEEE International Conference …, 2020 - ieeexplore.ieee.org
Several proposed schemes have shown remarkable results for autonomous navigation in
real scenes. However, most of them depend on expensive three-dimensional …
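
Given a distance-per-angle profile such as the virtual-lidar sketch above, one simple way to pick a road direction is a smoothed argmax over the scan; the window size is illustrative and this is not the model-based estimator described in the paper:

    import numpy as np


    def estimate_road_direction(angles: np.ndarray, distances: np.ndarray, window: int = 5) -> float:
        """Return the bearing (radians) with the largest smoothed free distance.

        angles and distances come from a virtual lidar scan over a segmentation
        mask; a short moving average suppresses single-ray spikes caused by
        segmentation noise.
        """
        kernel = np.ones(window) / window
        smoothed = np.convolve(distances, kernel, mode="same")
        return float(angles[np.argmax(smoothed)])


    # Usage with the virtual_lidar() sketch above (names are placeholders):
    #   angles, dists = virtual_lidar(label_map)
    #   heading = estimate_road_direction(angles, dists)   # steer toward `heading`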

Spannotation: Enhancing Semantic Segmentation for Autonomous Navigation with Efficient Image Annotation

SO Folorunsho, WR Norris - arXiv preprint arXiv:2402.18084, 2024 - arxiv.org
Spannotation is an open-source, user-friendly image annotation tool for semantic
segmentation, aimed specifically at autonomous navigation tasks. This study provides an …
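
The core annotation step in tools of this kind can be sketched as converting a user-drawn polygon outlining the traversable area into a binary training mask; this is a generic illustration with made-up coordinates and file names, not Spannotation's actual API:

    import numpy as np
    import cv2

    # Hypothetical user input: polygon vertices (x, y) outlining the drivable area,
    # e.g. clicked in an annotation GUI, for a 640x480 camera frame.
    polygon = np.array([[40, 479], [600, 479], [380, 250], [260, 250]], dtype=np.int32)

    mask = np.zeros((480, 640), dtype=np.uint8)
    cv2.fillPoly(mask, [polygon], color=1)          # 1 = traversable, 0 = background

    cv2.imwrite("frame_0001_mask.png", mask * 255)  # save as a viewable binary mask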