Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review

G Du, K Wang, S Lian, K Zhao - Artificial Intelligence Review, 2021 - Springer
This paper presents a comprehensive survey on vision-based robotic grasping. We
identify three key tasks during vision-based robotic grasping, which are object localization …

Review of deep reinforcement learning-based object gras**: Techniques, open challenges, and recommendations

MQ Mohammed, KL Chung, CS Chyi - IEEE Access, 2020 - ieeexplore.ieee.org
The motivation behind our work is to review and analyze the most relevant studies on deep
reinforcement learning-based object manipulation. Various studies are examined through a …

Cliport: What and where pathways for robotic manipulation

M Shridhar, L Manuelli, D Fox - Conference on robot learning, 2022 - proceedings.mlr.press
How can we imbue robots with the ability to manipulate objects precisely but also to reason
about them in terms of abstract concepts? Recent works in manipulation have shown that …

Transporter networks: Rearranging the visual world for robotic manipulation

A Zeng, P Florence, J Tompson… - … on Robot Learning, 2021 - proceedings.mlr.press
Robotic manipulation can be formulated as inducing a sequence of spatial displacements:
where the space being moved can encompass an object, part of an object, or end effector. In …

Ffb6d: A full flow bidirectional fusion network for 6d pose estimation

Y He, H Huang, H Fan, Q Chen… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
In this work, we present FFB6D, a full flow bidirectional fusion network designed for 6D pose
estimation from a single RGBD image. Our key insight is that appearance information in the …

Gen6d: Generalizable model-free 6-dof object pose estimation from rgb images

Y Liu, Y Wen, S Peng, C Lin, X Long, T Komura… - … on Computer Vision, 2022 - Springer
In this paper, we present a generalizable model-free 6-DoF object pose estimator called
Gen6D. Existing generalizable pose estimators either need the high-quality object models or …

Transcg: A large-scale real-world dataset for transparent object depth completion and a grasping baseline

H Fang, HS Fang, S Xu, C Lu - IEEE Robotics and Automation …, 2022 - ieeexplore.ieee.org
Transparent objects are common in our daily life and frequently handled in the automated
production line. Robust vision-based robotic grasping and manipulation for these objects …

Mira: Mental imagery for robotic affordances

YC Lin, P Florence, A Zeng, JT Barron… - … on Robot Learning, 2023 - proceedings.mlr.press
Humans form mental images of 3D scenes to support counterfactual imagination, planning,
and motor control. Our abilities to predict the appearance and affordance of the scene from …

Uni6d: A unified cnn framework without projection breakdown for 6d pose estimation

X Jiang, D Li, H Chen, Y Zheng… - Proceedings of the …, 2022 - openaccess.thecvf.com
As RGB-D sensors become more affordable, using RGB-D images to obtain high-accuracy
6D pose estimation results becomes a better option. State-of-the-art approaches typically …

RGB-D local implicit function for depth completion of transparent objects

L Zhu, A Mousavian, Y Xiang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Majority of the perception methods in robotics require depth information provided by RGB-D
cameras. However, standard 3D sensors fail to capture depth of transparent objects due to …