Deep learning approaches to grasp synthesis: A review

R Newbury, M Gu, L Chumbley… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Grasping is the process of picking up an object by applying forces and torques at a set of
contacts. Recent advances in deep learning methods have allowed rapid progress in robotic …
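The snippet's definition of grasping as forces and torques applied at a set of contacts can be made concrete with the textbook contact-wrench model (a standard formulation, not taken from this review): each contact force also induces a torque about the object frame, and the grasp's net effect is the summed 6-D wrench. A minimal numpy sketch, with the example contacts chosen purely for illustration:

import numpy as np

def contact_wrench(p, f):
    """6-D wrench [force; torque] from a force f applied at contact point p (object frame)."""
    return np.concatenate([f, np.cross(p, f)])

def net_wrench(points, forces):
    """Sum the wrenches contributed by every contact in the grasp."""
    return sum(contact_wrench(p, f) for p, f in zip(points, forces))

# Two antipodal fingertip contacts squeezing a unit cube along the x-axis.
points = [np.array([0.5, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0])]
forces = [np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
print(net_wrench(points, forces))  # ~zero wrench: the squeeze is internally balanced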

Artificial Intelligence in manufacturing: State of the art, perspectives, and future directions

RX Gao, J Krüger, M Merklein, HC Möhring, J Váncza - CIRP Annals, 2024 - Elsevier
Inspired by the natural intelligence of humans and bio-evolution, Artificial Intelligence (AI)
has seen accelerated growth since the beginning of the 21st century. Successful AI …

FoundationPose: Unified 6D pose estimation and tracking of novel objects

B Wen, W Yang, J Kautz… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We present FoundationPose, a unified foundation model for 6D object pose estimation and
tracking, supporting both model-based and model-free setups. Our approach can be instantly …

BundleSDF: Neural 6-DoF tracking and 3D reconstruction of unknown objects

B Wen, J Tremblay, V Blukis, S Tyree… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a near real-time (10Hz) method for 6-DoF tracking of an unknown object from a
monocular RGBD video sequence, while simultaneously performing neural 3D …

MegaPose: 6D pose estimation of novel objects via render & compare

Y Labbé, L Manuelli, A Mousavian, S Tyree… - arXiv preprint arXiv …, 2022 - arxiv.org
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects
unseen during training. At inference time, the method only assumes knowledge of (i) a …
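"Render & compare" refers to scoring pose hypotheses by rendering the object model under each candidate pose and measuring agreement with the observation. The toy sketch below illustrates only that idea and is not MegaPose's actual pipeline: full image rendering is replaced by a pinhole projection of model points, and the focal length, point model, and nearest-point score are all illustrative assumptions.

import numpy as np

def axis_angle_to_R(rotvec):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rotvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points, R, t, f=500.0):
    """Stand-in 'renderer': pinhole projection of object-model points under pose (R, t)."""
    cam = points @ R.T + t                       # model points expressed in the camera frame
    return f * cam[:, :2] / cam[:, 2:3]

def score(rendered, observed):
    """Lower is better: mean distance from each observed point to its nearest rendered point."""
    d = np.linalg.norm(observed[:, None, :] - rendered[None, :, :], axis=-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
model = rng.uniform(-0.05, 0.05, size=(200, 3))  # toy 3-D point model of the object

true_R = axis_angle_to_R(np.array([0.2, -0.1, 0.4]))
true_t = np.array([0.0, 0.0, 0.6])
observed = project(model, true_R, true_t)        # stands in for the detected object in the image

# Coarse stage: "render" each random pose hypothesis and keep the one that best matches.
hypotheses = [(axis_angle_to_R(rng.normal(0.0, 0.3, 3)), np.array([0.0, 0.0, 0.6]))
              for _ in range(500)]
best_R, best_t = min(hypotheses, key=lambda h: score(project(model, *h), observed))
print("best hypothesis score:", score(project(model, best_R, best_t), observed))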

Language-driven grasp detection

AD Vuong, MN Vu, B Huang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Grasp detection is a persistent and intricate challenge with various industrial applications.
Recently, many methods and datasets have been proposed to tackle the grasp detection …

You only demonstrate once: Category-level manipulation from single visual demonstration

B Wen, W Lian, K Bekris, S Schaal - arXiv preprint arXiv:2201.12716, 2022 - arxiv.org
Promising results have been achieved recently in category-level manipulation that
generalizes across object instances. Nevertheless, it often requires expensive real-world …

One-shot transfer of affordance regions? AffCorrs!

D Hadjivelichkov, S Zwane, L Agapito… - … on Robot Learning, 2023 - proceedings.mlr.press
In this work, we tackle one-shot visual search of object parts. Given a single reference image
of an object with annotated affordance regions, we segment semantically corresponding …
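A common way to realize this kind of one-shot part correspondence is to match dense visual features between the annotated reference and the query image. The sketch below shows only that mechanic, with rolled random tensors standing in for real backbone features; the feature shapes, shift, and threshold are illustrative assumptions, and this is not the AffCorrs method itself.

import numpy as np

rng = np.random.default_rng(1)
H, W, C = 32, 32, 64

# Dense per-pixel feature maps; random tensors stand in for a pretrained vision backbone.
ref_feats = rng.normal(size=(H, W, C))
# Toy query: the same content shifted by (8, 5) pixels, simulated by rolling the reference.
query_feats = np.roll(ref_feats, shift=(8, 5), axis=(0, 1))

# One-shot annotation: binary mask over the reference marking the affordance region.
ref_mask = np.zeros((H, W), dtype=bool)
ref_mask[10:16, 10:16] = True

def unit(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

ref_region = unit(ref_feats)[ref_mask]            # (K, C) features inside the annotation
query_flat = unit(query_feats).reshape(-1, C)     # (H*W, C)

# Cosine similarity of each query location to its best-matching annotated reference feature.
similarity = (query_flat @ ref_region.T).max(axis=1).reshape(H, W)
predicted_mask = similarity > 0.9                 # exact matches in this toy score 1.0

print("predicted region size:", predicted_mask.sum())   # 36 = the shifted 6x6 annotation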

Grasp-Anything: Large-scale grasp dataset from foundation models

AD Vuong, MN Vu, H Le, B Huang… - … on Robotics and …, 2024 - ieeexplore.ieee.org
Foundation models such as ChatGPT have made significant strides in robotic tasks due to
their universal representation of real-world domains. In this paper, we leverage foundation …

SimPLE, a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects

M Bauza, A Bronars, Y Hou, I Taylor… - Science Robotics, 2024 - science.org
Existing robotic systems face a tension between generality and precision. Deployed
solutions for robotic manipulation tend to fall into the paradigm of one robot solving a single …