Paper Title


MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels

Paper Authors

Taeryung Lee, Gyeongsik Moon, Kyoung Mu Lee

Paper Abstract


We tackle the problem of generating long-term 3D human motion from multiple action labels. The two main previous approaches, action-conditioned and motion-conditioned methods, have limitations in solving this problem. Action-conditioned methods generate a sequence of motion from a single action; hence, they cannot generate long-term motions composed of multiple actions and transitions between actions. Meanwhile, motion-conditioned methods generate future motions from an initial motion. The generated future motions depend only on the past, so they cannot be controlled by the user's desired actions. We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct takes account of both action and motion conditions with a unified recurrent generation system. It repetitively takes the previous motion and an action label, then generates a smooth transition and the motion of the given action. As a result, MultiAct produces realistic long-term motion controlled by the given sequence of multiple action labels. Code is available at https://github.com/TaeryungLee/MultiAct_RELEASE.
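
For illustration, below is a minimal Python sketch of the recurrent generation loop described in the abstract: at each step, the previously generated motion segment and the next action label are fed back in, and a transition plus a new action segment are appended to the sequence. The function generate_step, the pose dimensionality, and the frame counts are hypothetical placeholders, and the network is mocked with a simple interpolation; this is not the authors' implementation (see the repository above for that).

# Hypothetical sketch of the recurrent long-term generation loop.
# Names, dimensions, and the mock generator are illustrative assumptions,
# not the MultiAct API.
import numpy as np

POSE_DIM = 72  # assumed SMPL-style pose vector length (illustrative)

def generate_step(prev_motion: np.ndarray, action_label: str,
                  transition_len: int = 15, motion_len: int = 60) -> np.ndarray:
    """Stand-in for one recurrent step: given the previous motion segment and
    the next action label, return a smooth transition followed by the motion
    of the given action. The real model is replaced by a linear blend toward
    a label-dependent pose, purely for illustration."""
    rng = np.random.default_rng(sum(ord(c) for c in action_label))
    target = rng.standard_normal(POSE_DIM)
    last_pose = prev_motion[-1]
    # Transition: interpolate from the last observed pose to the new target pose.
    alphas = np.linspace(0.0, 1.0, transition_len)[:, None]
    transition = (1 - alphas) * last_pose + alphas * target
    # Action motion: small perturbations around the target pose.
    motion = target + 0.05 * rng.standard_normal((motion_len, POSE_DIM))
    return np.concatenate([transition, motion], axis=0)

def generate_long_term(initial_motion: np.ndarray, action_labels: list[str]) -> np.ndarray:
    """Recurrently condition on the previously generated segment, once per
    action label, and concatenate all segments into one long-term motion."""
    segments = [initial_motion]
    prev = initial_motion
    for label in action_labels:
        segment = generate_step(prev, label)
        segments.append(segment)
        prev = segment  # the new segment conditions the next step
    return np.concatenate(segments, axis=0)

if __name__ == "__main__":
    seed_motion = np.zeros((30, POSE_DIM))  # a short initial motion segment
    long_motion = generate_long_term(seed_motion, ["walk", "sit", "wave"])
    print(long_motion.shape)  # (30 + 3 * (15 + 60), 72) = (255, 72)

Running the sketch prints (255, 72): 30 seed frames plus three generated segments of 15 transition frames and 60 action frames each, mirroring how the recurrent system grows the sequence one action label at a time.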
