Paper Title


Robotic Grasping through Combined Image-Based Grasp Proposal and 3D Reconstruction

Authors

Yang, Daniel, Tosun, Tarik, Eisner, Ben, Isler, Volkan, Lee, Daniel

Abstract


We present a novel approach to robotic grasp planning using both a learned grasp proposal network and a learned 3D shape reconstruction network. Our system generates 6-DOF grasps from a single RGB-D image of the target object, which is provided as input to both networks. By using the geometric reconstruction to refine the candidate grasp produced by the grasp proposal network, our system is able to accurately grasp both known and unknown objects, even when the grasp location on the object is not visible in the input image. This paper presents the network architectures, training procedures, and grasp refinement method that comprise our system. Experiments demonstrate the efficacy of our system at grasping both known and unknown objects (91% success rate in a physical robot environment, 84% success rate in a simulated environment). We additionally perform ablation studies that show the benefits of combining a learned grasp proposal with geometric reconstruction for grasping, and also show that our system outperforms several baselines in a grasping task.
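The pipeline the abstract describes can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the function names, the stand-in "networks" (simple geometric placeholders), and the nearest-surface-point refinement rule are all assumptions introduced for illustration; the paper's actual networks and refinement method are learned models described in the full text.

```python
import numpy as np

def propose_grasp(points):
    # Stand-in for the learned grasp-proposal network. Here `points` is
    # assumed to be the RGB-D image already back-projected to an (N, 3)
    # point cloud; a real network would consume the image directly.
    # Returns a 6-DOF grasp as (position, 3x3 rotation matrix).
    return points.mean(axis=0), np.eye(3)

def reconstruct_shape(points):
    # Stand-in for the learned 3D shape-reconstruction network, which
    # should return an (M, 3) point set covering the full object surface,
    # including parts not visible in the input view.
    return points

def refine_grasp(grasp, surface):
    # Hypothetical geometric refinement: snap the proposed grasp position
    # to the nearest point on the reconstructed surface, keeping the
    # proposed orientation unchanged.
    pos, rot = grasp
    dists = np.linalg.norm(surface - pos, axis=1)
    return surface[np.argmin(dists)], rot

# Toy input: a few 3D points standing in for a back-projected RGB-D image.
obs = np.array([[0.0, 0.0, 0.5],
                [0.1, 0.0, 0.5],
                [0.0, 0.1, 0.6]])
grasp = refine_grasp(propose_grasp(obs), reconstruct_shape(obs))
```

The key structural point the sketch preserves is that the same single observation feeds both networks, and the reconstruction is used only to refine the proposal, not to generate grasps from scratch.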
