Ruihai Wu
Verified email at pku.edu.cn - Homepage
Title · Cited by · Year
Unpaired Image-to-Image Translation using Adversarial Consistency Loss
Y Zhao, R Wu, H Dong
European Conference on Computer Vision, 800-815, 2020
146 · 2020
VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects
R Wu*, Y Zhao*, K Mo*, Z Guo, Y Wang, T Wu, Q Fan, X Chen, L Guibas, ...
International Conference on Learning Representations (ICLR) 2022, 2021
96 · 2021
AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions
Y Wang*, R Wu*, K Mo*, J Ke, Q Fan, L Guibas, H Dong
ECCV 2022, 2021
61 · 2021
TDAPNet: Prototype Network with Recurrent Top-Down Attention for Robust Object Classification under Partial Occlusion
M Xiao, A Kortylewski, R Wu, S Qiao, W Shen, A Yuille
European Conference on Computer Vision Workshops, 447-463, 2019
37* · 2019
DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Object Manipulation
Y Zhao*, R Wu*, Z Chen, Y Zhang, Q Fan, K Mo, H Dong
International Conference on Learning Representations (ICLR) 2023, 2022
33* · 2022
Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects
C Ning, R Wu, H Lu, K Mo, H Dong
NeurIPS 2023, 2023
30 · 2023
Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation
R Wu*, C Ning*, H Dong
ICCV 2023, 2023
26 · 2023
Learning Environment-Aware Affordance for 3D Articulated Object Manipulation under Occlusions
R Wu, K Cheng, Y Shen, C Ning, G Zhan, H Dong
NeurIPS 2023, 2023
21 · 2023
Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly
R Wu, C Tie, Y Du, Y Zhao, H Dong
ICCV 2023, 2023
17 · 2023
Articulated object manipulation with coarse-to-fine affordance for mitigating the effect of point cloud noise
S Ling, Y Wang, R Wu, S Wu, Y Zhuang, T Xu, Y Li, C Liu, H Dong
2024 IEEE International Conference on Robotics and Automation (ICRA), 10895 …, 2024
12 · 2024
UniDoorManip: Learning Universal Door Manipulation Policy Over Large-scale and Diverse Door Manipulation Environments
Y Li*, X Zhang*, R Wu*, Z Zhang, Y Geng, H Dong, Z He
arXiv preprint arXiv:2403.02604, 2024
10 · 2024
UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence
R Wu, H Lu, Y Wang, Y Wang, H Dong
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
9 · 2024
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
H Jiang, B Huang, R Wu, Z Li, S Garg, H Nayyeri, S Wang, Y Li
CoRL 2024, 2024
8 · 2024
Localize, Assemble, and Predicate: Contextual Object Proposal Embedding for Visual Relation Detection
R Wu, K Xu, C Liu, N Zhuang, Y Mu
AAAI 2020, 12297-12304, 2020
7 · 2020
NaturalVLM: Leveraging Fine-grained Natural Language for Affordance-Guided Visual Manipulation
R Xu, Y Shen, X Li, R Wu, H Dong
RA-L 2024, 2024
5 · 2024
GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation
H Lu, R Wu, Y Li, S Li, Z Zhu, C Ning, Y Shen, L Luo, Y Chen, H Dong
NeurIPS 2024; Spotlight Presentation on ICRA 2024 Workshop on Deformable …, 2024
4* · 2024
EqvAfford: SE(3) Equivariance for Point-Level Affordance Learning
Y Chen, C Tie, R Wu, H Dong
CVPR 2024 Workshop on Equivariant Vision: From Theory to Practice, 2024
3 · 2024
Broadcasting Support Relations Recursively from Local Dynamics for Object Retrieval in Clutters
Y Li*, R Wu*, H Lu, C Ning, Y Shen, G Zhan, H Dong
RSS 2024, 2024
3 · 2024
PreAfford: Universal Affordance-Based Pre-Grasping for Diverse Objects and Environments
K Ding, B Chen, R Wu, Y Li, Z Zhang, H Gao, S Li, Y Zhu, G Zhou, H Dong, ...
IROS 2024, 2024
3 · 2024
MobileAfford: Mobile Robotic Manipulation through Differentiable Affordance Learning
Y Li, K Cheng, R Wu, Y Shen, K Zhou, H Dong
2nd Workshop on Mobile Manipulation and Embodied Intelligence at ICRA 2024, 2024
3 · 2024
Articles 1–20