Paper Title
3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations
Paper Authors
Paper Abstract
While NeRF-based 3D-aware image generation methods enable viewpoint control, limitations remain for their adoption in various 3D applications. Due to their view-dependent and light-entangled volume representation, the generated 3D geometry is of unrealistic quality, and the color must be re-rendered for every desired viewpoint. To broaden 3D applicability from 3D-aware image generation to 3D-controllable object generation, we propose factorized representations that are view-independent and light-disentangled, along with training schemes that use randomly sampled light conditions. We demonstrate the superiority of our method by visualizing the factorized representations, relighted images, and albedo-textured meshes. In addition, we show that our approach improves the quality of the generated geometry through visualizations and quantitative comparisons. To the best of our knowledge, this is the first work to extract albedo-textured meshes from unposed 2D images without any additional labels or assumptions.