ManiSkill2: A unified benchmark for generalizable manipulation skills. J Gu, F Xiang, X Li, Z Ling, X Liu, T Mu, Y Tang, S Tao, X Wei, Y Yao, et al. International Conference on Learning Representations (ICLR), 2023. Cited by 98.
MD-Splatting: Learning metric deformation from 4D Gaussians in highly deformable scenes. BP Duisterhof, Z Mandi, Y Yao, JW Liu, MZ Shou, S Song, J Ichnowski. arXiv preprint arXiv:2312.00583, 2023. Cited by 31.
On the efficacy of 3D point cloud reinforcement learning. Z Ling, Y Yao, X Li, H Su. arXiv preprint arXiv:2306.06799, 2023. Cited by 13.
DeformGS: Scene flow in highly deformable scenes for deformable object manipulation. BP Duisterhof, Z Mandi, Y Yao, JW Liu, J Seidenschwarz, MZ Shou, et al. arXiv preprint arXiv:2312.00583, 2023. Cited by 5.
When should we prefer state-to-visual DAgger over visual reinforcement learning? T Mu, Z Li, SW Strzelecki, X Yuan, Y Yao, L Liang, H Su. The 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025), 2024. Cited by 1.
Automating robot failure recovery using vision-language models with optimized prompts. H Chen, Y Yao, R Liu, C Liu, J Ichnowski. arXiv preprint arXiv:2409.03966, 2024. Cited by 1.
Soft robotic dynamic in-hand pen spinning. Y Yao, U Yoo, J Oh, CG Atkeson, J Ichnowski. arXiv preprint arXiv:2411.12734, 2024.