Zhecheng Yuan
Other names: 袁哲诚
Verified email at mails.tsinghua.edu.cn - Homepage
Title
Cited by
Year
Pre-trained image encoder for generalizable visual reinforcement learning
Z Yuan, Z Xue, B Yuan, X Wang, Y Wu, Y Gao, H Xu
Advances in Neural Information Processing Systems 35, 13022-13037, 2022
74 · 2022
Gensim: Generating robotic simulation tasks via large language models
L Wang, Y Ling, Z Yuan, M Shridhar, C Bao, Y Qin, B Wang, H Xu, ...
arXiv preprint arXiv:2310.01361, 2023
69 · 2023
On pre-training for visuo-motor control: Revisiting a learning-from-scratch baseline
N Hansen, Z Yuan, Y Ze, T Mu, A Rajeswaran, H Su, H Xu, X Wang
arXiv preprint arXiv:2212.05749, 2022
57* · 2022
A comprehensive survey of data augmentation in visual reinforcement learning
G Ma, Z Wang, Z Yuan, X Wang, B Yuan, D Tao
arXiv preprint arXiv:2210.04561, 2022
35 · 2022
Don't touch what matters: Task-aware lipschitz data augmentation for visual reinforcement learning
Z Yuan, G Ma, Y Mu, B Xia, B Yuan, X Wang, P Luo, H Xu
arXiv preprint arXiv:2202.09982, 2022
35 · 2022
Drm: Mastering visual reinforcement learning through dormant ratio minimization
G Xu, R Zheng, Y Liang, X Wang, Z Yuan, T Ji, Y Luo, X Liu, J Yuan, ...
arXiv preprint arXiv:2310.19668, 2023
28 · 2023
Useek: Unsupervised se (3)-equivariant 3d keypoints for generalizable manipulation
Z Xue, Z Yuan, J Wang, X Wang, Y Gao, H Xu
2023 IEEE International Conference on Robotics and Automation (ICRA), 1715-1722, 2023
26 · 2023
H-InDex: Visual reinforcement learning with hand-informed representations for dexterous manipulation
Y Ze, Y Liu, R Shi, J Qin, Z Yuan, J Wang, H Xu
Advances in Neural Information Processing Systems 36, 74394-74409, 2023
19 · 2023
Rl-vigen: A reinforcement learning benchmark for visual generalization
Z Yuan, S Yang, P Hua, C Chang, K Hu, H Xu
Advances in Neural Information Processing Systems 36, 6720-6747, 2023
14 · 2023
Learning to manipulate anywhere: A visual generalizable framework for reinforcement learning
Z Yuan, T Wei, S Cheng, G Zhang, Y Chen, H Xu
arXiv preprint arXiv:2407.15815, 2024
12 · 2024
Roboscript: Code generation for free-form manipulation tasks across real and simulation
J Chen, Y Mu, Q Yu, T Wei, S Wu, Z Yuan, Z Liang, C Yang, K Zhang, ...
arXiv preprint arXiv:2402.14623, 2024
10 · 2024
Roboduet: A framework affording mobile-manipulation and cross-embodiment
G Pan, Q Ben, Z Yuan, G Jiang, Y Ji, J Pang, H Liu, H Xu
arXiv preprint arXiv:2403.17367, 2024
8 · 2024
Generalizable visual reinforcement learning with segment anything model
Z Wang, Y Ze, Y Sun, Z Yuan, H Xu
arXiv preprint arXiv:2312.17116, 2023
6 · 2023
Extraneousness-Aware Imitation Learning
RC Zheng, K Hu, Z Yuan, B Chen, H Xu
2023 IEEE International Conference on Robotics and Automation (ICRA), 2973-2979, 2023
2 · 2023
DenseMatcher: Learning 3D Semantic Correspondence for Category-Level Manipulation from a Single Demo
J Zhu, Y Ju, J Zhang, M Wang, Z Yuan, K Hu, H Xu
arXiv preprint arXiv:2412.05268, 2024
1 · 2024
DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning
Z Xue, S Deng, Z Chen, Y Wang, Z Yuan, H Xu
arXiv preprint arXiv:2502.16932, 2025
2025
DOGlove: Dexterous Manipulation with a Low-Cost Open-Source Haptic Force Feedback Glove
H Zhang, S Hu, Z Yuan, H Xu
arXiv preprint arXiv:2502.07730, 2025
2025
RoboDuet: Whole-body Legged Loco-Manipulation with Cross-Embodiment Deployment
G Pan, Q Ben, Z Yuan, G Jiang, Y Ji, S Li, J Pang, H Liu, H Xu
arXiv preprint arXiv:2403.17367, 2024
2024
Articles 1–18