Ryo Yonetani
Senior Research Scientist at CyberAgent
Verified email at cyberagent.co.jp - Homepage
Title
Cited by
Year
Client selection for federated learning with heterogeneous resources in mobile edge
T Nishio, R Yonetani
ICC 2019-2019 IEEE international conference on communications (ICC), 1-7, 2019
1786 · 2019
Hybrid-FL for wireless networks: Cooperative learning mechanism using non-IID data
N Yoshida, T Nishio, M Morikura, K Yamamoto, R Yonetani
ICC 2020-2020 IEEE International Conference On Communications (ICC), 1-7, 2020
270* · 2020
Future person localization in first-person videos
T Yagi, K Mangalam, R Yonetani, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
230 · 2018
Can eye help you? Effects of visualizing eye fixations on remote collaboration scenarios for physical tasks
K Higuchi, R Yonetani, Y Sato
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems …, 2016
137 · 2016
Path planning using neural A* search
R Yonetani, T Taniai, M Barekatain, M Nishimura, A Kanezaki
International conference on machine learning, 12029-12039, 2021
114 · 2021
Degree of interest estimating device and degree of interest estimating method
K Sakata, S Maeda, R Yonetani, H Kawashima, T Hirayama, ...
US Patent 9,538,219, 2017
95 · 2017
Recognizing micro-actions and reactions from paired egocentric videos
R Yonetani, KM Kitani, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2016
95 · 2016
Privacy-preserving visual learning using doubly permuted homomorphic encryption
R Yonetani, V Naresh Boddeti, KM Kitani, Y Sato
Proceedings of the IEEE international conference on computer vision, 2040-2050, 2017
84 · 2017
Computational models of human visual attention and their implementations: A survey
A Kimura, R Yonetani, T Hirayama
IEICE TRANSACTIONS on Information and Systems 96 (3), 562-578, 2013
69 · 2013
EgoScanning: Quickly scanning first-person videos with egocentric elastic timelines
K Higuchi, R Yonetani, Y Sato
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017
57 · 2017
Ego-surfing first-person videos
R Yonetani, KM Kitani, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2015
56 · 2015
Decentralized learning of generative adversarial networks from multi-client non-IID data
R Yonetani, T Takahashi, A Hashimoto, Y Ushiku
arXiv preprint arXiv:1905.09684, 2019
46 · 2019
Gaze target determination device and gaze target determination method
K Sakata, S Maeda, R Yonetani, H Kawashima, T Hirayama, ...
US Patent 8,678,589, 2014
41 · 2014
L2B: Learning to balance the safety-efficiency trade-off in interactive crowd-aware robot navigation
M Nishimura, R Yonetani
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2020
39 · 2020
Precise multi-modal in-hand pose estimation using low-precision sensors for robotic assembly
F von Drigalski, K Hayashi, Y Huang, R Yonetani, M Hamaya, K Tanaka, ...
2021 IEEE International Conference on Robotics and Automation (ICRA), 968-974, 2021
33 · 2021
Multipolar: Multi-source policy aggregation for transfer reinforcement learning between diverse environmental dynamics
M Barekatain, R Yonetani, M Hamaya
arXiv preprint arXiv:1909.13111, 2019
33 · 2019
Multi-mode saliency dynamics model for analyzing gaze and attention
R Yonetani, H Kawashima, T Matsuyama
Proceedings of the symposium on eye tracking research and applications, 115-122, 2012
33 · 2012
Mental focus analysis using the spatio-temporal correlation between visual saliency and eye movements
R Yonetani, H Kawashima, T Hirayama, T Matsuyama
Journal of information Processing 20 (1), 267-276, 2012
31 · 2012
Support strategies for remote guides in assisting people with visual impairments for effective indoor navigation
R Kamikubo, N Kato, K Higuchi, R Yonetani, Y Sato
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems …, 2020
26 · 2020
Prioritized safe interval path planning for multi-agent pathfinding with continuous time on 2D roadmaps
K Kasaura, M Nishimura, R Yonetani
IEEE Robotics and Automation Letters 7 (4), 10494-10501, 2022
19 · 2022
Articles 1–20