Paper Title

Active Visuo-Tactile Interactive Robotic Perception for Accurate Object Pose Estimation in Dense Clutter

Paper Authors

Prajval Kumar Murali, Anirvan Dutta, Michael Gentner, Etienne Burdet, Ravinder Dahiya, Mohsen Kaboli

Paper Abstract

This work presents a novel active visuo-tactile based framework for robotic systems to accurately estimate the pose of objects in dense cluttered environments. The scene representation is derived using a novel declutter graph (DG), which describes the relationships among objects in the scene for decluttering by leveraging semantic segmentation and grasp affordance networks. The graph formulation allows robots to efficiently declutter the workspace by autonomously selecting the next best object to remove and the optimal action (prehensile or non-prehensile) to perform. Furthermore, we propose a novel translation-invariant quaternion filter (TIQF) for active vision and active tactile based pose estimation. Both active visual and active tactile points are selected by maximizing the expected information gain. We evaluate our proposed framework on a system with two robots coordinating on randomized scenes of densely cluttered objects and perform ablation studies with static vision and active vision based estimation prior to and post decluttering as baselines. Our proposed active visuo-tactile interactive perception framework shows up to 36% improvement in pose accuracy compared to the active vision baseline.
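The abstract mentions that active visual and tactile measurement points are chosen by maximizing the expected information gain. As a rough illustration of that idea only (this is not the authors' TIQF implementation; all function and variable names such as `expected_information_gain`, `select_next_point`, and `candidate_H` are hypothetical), the minimal Python sketch below picks the candidate measurement that gives the largest expected entropy reduction of a Gaussian pose estimate under a standard Kalman-filter update:

```python
# Minimal, hypothetical sketch of information-gain-driven point selection.
# NOT the paper's implementation; names and models are assumptions for illustration.
import numpy as np

def expected_information_gain(state_cov, H, meas_cov):
    """Expected entropy reduction of a Gaussian state estimate if a measurement
    with linearized model H and noise covariance meas_cov were taken."""
    S = H @ state_cov @ H.T + meas_cov               # innovation covariance
    K = state_cov @ H.T @ np.linalg.inv(S)           # Kalman gain
    post_cov = (np.eye(state_cov.shape[0]) - K @ H) @ state_cov
    # gain = 0.5 * log(det(prior) / det(posterior))
    return 0.5 * np.log(np.linalg.det(state_cov) / np.linalg.det(post_cov))

def select_next_point(candidate_H, state_cov, meas_cov):
    """Pick the candidate (visual or tactile) measurement with maximal gain."""
    gains = [expected_information_gain(state_cov, H, meas_cov) for H in candidate_H]
    return int(np.argmax(gains))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state_cov = np.diag([0.05, 0.05, 0.05])          # toy 3-DoF pose uncertainty
    meas_cov = 0.01 * np.eye(3)
    candidate_H = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(5)]
    print("next best measurement index:", select_next_point(candidate_H, state_cov, meas_cov))
```

The same greedy "evaluate each candidate, take the arg-max of expected gain" pattern could in principle be applied to both camera viewpoints and tactile probing points, which is how the abstract frames the active vision and active touch stages.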
