Paper Title

Learning Object Manipulation Skills from Video via Approximate Differentiable Physics

Paper Authors

Vladimir Petrik, Mohammad Nomaan Qureshi, Josef Sivic, Makarand Tapaswi

Paper Abstract

We aim to teach robots to perform simple object manipulation tasks by watching a single video demonstration. Towards this goal, we propose an optimization approach that outputs a coarse and temporally evolving 3D scene to mimic the action demonstrated in the input video. Similar to previous work, a differentiable renderer ensures perceptual fidelity between the 3D scene and the 2D video. Our key novelty lies in the inclusion of a differentiable approach to solve a set of Ordinary Differential Equations (ODEs) that allows us to approximately model laws of physics such as gravity, friction, and hand-object or object-object interactions. This not only enables us to dramatically improve the quality of estimated hand and object states, but also produces physically admissible trajectories that can be directly translated to a robot without the need for costly reinforcement learning. We evaluate our approach on a 3D reconstruction task that consists of 54 video demonstrations sourced from 9 actions such as pull something from right to left or put something in front of something. Our approach improves over previous state-of-the-art by almost 30%, demonstrating superior quality on especially challenging actions involving physical interactions of two objects such as put something onto something. Finally, we showcase the learned skills on a Franka Emika Panda robot.
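The abstract combines two differentiable components: a renderer that ties the evolving 3D scene to the 2D video, and an ODE solver that keeps the estimated trajectory physically plausible (gravity, friction, contacts). As a rough illustration only, and not the authors' implementation, the sketch below rolls out a point-mass object with an explicit Euler integrator, projects it with a toy orthographic camera standing in for the differentiable renderer, and optimizes a constant hand force so the projected trajectory matches (made-up) 2D keypoints extracted from a video. All names, constants, and the single-force parameterization are hypothetical assumptions.

```python
# Minimal sketch (assumptions labeled): differentiate a physics rollout
# against 2D video evidence, in the spirit of the approach described above.
import jax
import jax.numpy as jnp

DT, STEPS = 0.05, 40                 # integration step and horizon (assumed)
G = jnp.array([0.0, 0.0, -9.81])     # gravity
MU = 0.3                             # friction coefficient (assumed)

def step(state, force):
    """One explicit-Euler step of an approximate point-mass ODE."""
    pos, vel = state
    # Crude velocity-proportional friction opposing horizontal motion.
    friction = -MU * jnp.array([vel[0], vel[1], 0.0])
    acc = G + force + friction
    vel = vel + DT * acc
    pos = pos + DT * vel
    # Simple contact model: the object cannot penetrate the table plane z = 0.
    pos = pos.at[2].set(jnp.maximum(pos[2], 0.0))
    return (pos, vel), pos

def rollout(force):
    """Integrate the ODE for STEPS steps under a constant applied force."""
    init = (jnp.zeros(3), jnp.zeros(3))
    _, traj = jax.lax.scan(lambda s, _: step(s, force), init, None, length=STEPS)
    return traj  # (STEPS, 3) object positions

def project(points_3d):
    """Toy orthographic 'renderer': keep x and z as image coordinates."""
    return points_3d[:, [0, 2]]

def loss(force, video_keypoints_2d):
    """Reprojection-style loss between the rollout and the video evidence."""
    return jnp.mean((project(rollout(force)) - video_keypoints_2d) ** 2)

# Fake 2D keypoints standing in for detections from the input video.
target = jnp.stack([jnp.linspace(0.0, 1.0, STEPS), jnp.zeros(STEPS)], axis=1)

force = jnp.array([0.5, 0.0, 9.81])  # initial guess for the hand force
grad_fn = jax.jit(jax.grad(loss))
for _ in range(200):
    force = force - 0.1 * grad_fn(force, target)
print("optimized force:", force)
```

Because the loss is differentiated through the entire rollout, the recovered force already respects the (approximate) physics, which is the property that lets such trajectories transfer to a robot without reinforcement learning.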
