Paper Title
Autoregressive 3D Shape Generation via Canonical Mapping
Paper Authors
Paper Abstract
With the capacity of modeling long-range dependencies in sequential data, transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation. Yet, taming them to generate less structured and voluminous data formats such as high-resolution point clouds has seldom been explored, due to ambiguous sequentialization processes and infeasible computation burden. In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation. The key idea is to decompose point clouds of one category into semantically aligned sequences of shape compositions, via a learned canonical space. These shape compositions can then be quantized and used to learn a context-rich composition codebook for point cloud generation. Experimental results on point cloud reconstruction and unconditional generation show that our model performs favorably against state-of-the-art approaches. Furthermore, our model can be easily extended to multi-modal shape completion as an application of conditional shape generation.
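The quantization step described in the abstract — mapping continuous shape-composition embeddings to entries of a learned codebook, yielding the discrete token sequence a transformer can model autoregressively — can be sketched minimally as a nearest-neighbor lookup. This is an illustrative sketch only: the function names, array shapes, and NumPy implementation are assumptions for exposition, not the paper's actual architecture or training procedure.

```python
import numpy as np

def quantize(features, codebook):
    """Map each shape-composition embedding to its nearest codebook entry.

    features: (N, D) array of per-composition embeddings (hypothetical shapes).
    codebook: (K, D) array of learned code vectors.
    Returns (indices, quantized): `indices` is the discrete token sequence
    a transformer would model autoregressively; `quantized` is the lookup.
    """
    # Pairwise squared distances between every feature and every code.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d2.argmin(axis=1)   # nearest code id per composition
    quantized = codebook[indices]  # replace each feature with its code
    return indices, quantized

# Toy example: perturb 4 known codes slightly; quantization recovers their ids.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(3, 2))
features = codebook[[0, 2, 2, 1]] + 0.01 * rng.normal(size=(4, 2))
idx, q = quantize(features, codebook)
```

In a full VQ-style pipeline, the codebook itself is learned jointly with an encoder (e.g. with a commitment loss), and the resulting index sequences are what the transformer is trained on.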