Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review

G Du, K Wang, S Lian, K Zhao - Artificial Intelligence Review, 2021 - Springer
This paper presents a comprehensive survey on vision-based robotic grasping. We
identify three key tasks during vision-based robotic grasping, which are object localization …

Review of deep learning methods in robotic grasp detection

S Caldera, A Rassau, D Chai - Multimodal Technologies and Interaction, 2018 - mdpi.com
For robots to attain more general-purpose utility, grasping is a necessary skill to master.
Such general-purpose robots may use their perception abilities to visually identify grasps for …

GraspNet-1Billion: A large-scale benchmark for general object grasping

HS Fang, C Wang, M Gou, C Lu - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
Object grasping is critical for many applications and is also a challenging computer vision
problem. However, for cluttered scenes, current research suffers from the problems of …

Antipodal robotic gras** using generative residual convolutional neural network

S Kumra, S Joshi, F Sahin - 2020 IEEE/RSJ International …, 2020 - ieeexplore.ieee.org
In this paper, we present a modular robotic system to tackle the problem of generating and
performing antipodal robotic grasps for unknown objects from the n-channel image of the …

Real-world multiobject, multigrasp detection

FJ Chu, R Xu, PA Vela - IEEE Robotics and Automation Letters, 2018 - ieeexplore.ieee.org
A deep learning architecture is proposed to predict graspable locations for robotic
manipulation. It considers situations where no, one, or multiple objects are seen. By …

ACRONYM: A large-scale grasp dataset based on simulation

C Eppner, A Mousavian, D Fox - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
We introduce ACRONYM, a dataset for robot grasp planning based on physics simulation.
The dataset contains 17.7 M parallel-jaw grasps, spanning 8872 objects from 262 different …

When transformer meets robotic grasping: Exploits context for efficient grasp detection

S Wang, Z Zhou, Z Kan - IEEE Robotics and Automation Letters, 2022 - ieeexplore.ieee.org
In this letter, we present a transformer-based architecture, namely TF-Grasp, for robotic
grasp detection. The developed TF-Grasp framework has two elaborate designs making it …

GaussianGrasper: 3D language Gaussian splatting for open-vocabulary robotic grasping

Y Zheng, X Chen, Y Zheng, S Gu… - IEEE Robotics and …, 2024 - ieeexplore.ieee.org
Constructing a 3D scene capable of accommodating open-ended language queries is a
pivotal pursuit in the domain of robotics, which facilitates robots in executing object …

Fully convolutional grasp detection network with oriented anchor box

X Zhou, X Lan, H Zhang, Z Tian… - 2018 IEEE/RSJ …, 2018 - ieeexplore.ieee.org
In this paper, we present a real-time approach to predict multiple grasping poses for a
parallel-plate robotic gripper using RGB images. A model with oriented anchor box …

Object detection recognition and robot grasping based on machine learning: A survey

Q Bai, S Li, J Yang, Q Song, Z Li, X Zhang - IEEE Access, 2020 - ieeexplore.ieee.org
With the rapid development of machine learning, its powerful capabilities in the machine
vision field are increasingly evident. The combination of machine vision and robotics to achieve the …