Title

Rewriting a Deep Generative Model

Authors

David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba

Abstract

A deep generative model such as a GAN learns to model a rich set of semantic and physical rules about the target distribution, but up to now, it has been obscure how such rules are encoded in the network, or how a rule could be changed. In this paper, we introduce a new problem setting: manipulation of specific rules encoded by a deep generative model. To address the problem, we propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory. We derive an algorithm for modifying one entry of the associative memory, and we demonstrate that several interesting structural rules can be located and modified within the layers of state-of-the-art generative models. We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects, and we show several proof-of-concept applications. Finally, results on multiple datasets demonstrate the advantage of our method against standard fine-tuning methods and edit transfer algorithms.
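The central formulation can be illustrated with a small numerical sketch. Assuming a simplified setting in which a layer computes v = W k, the weight W acts as a linear associative memory, and inserting a new key-value association (k*, v*) reduces to a covariance-weighted rank-one update that satisfies the new association exactly while minimally disturbing other keys. The function name `rank_one_rewrite`, the NumPy implementation, and the single-vector setting below are illustrative assumptions, not the authors' released code; the paper's method operates on convolutional feature maps and on target regions selected through the user interface.

```python
import numpy as np

def rank_one_rewrite(W, keys, k_star, v_star):
    """Sketch: rewrite one entry of a linear associative memory v = W @ k.

    W      : (d_out, d_in) layer weight, viewed as the memory
    keys   : (n, d_in) sample of keys observed by the layer
    k_star : (d_in,) key whose stored value should change
    v_star : (d_out,) desired new value for that key
    """
    # Second-moment matrix of the observed keys; it defines the norm in
    # which the weight change is kept minimal.
    C = keys.T @ keys / len(keys)
    # Update direction d = C^{-1} k_star.
    d = np.linalg.solve(C, k_star)
    # How far the current memory is from storing the desired association.
    residual = v_star - W @ k_star
    # Scale the rank-one update so that W_new @ k_star == v_star exactly.
    Lambda = residual[:, None] / (k_star @ d)
    return W + Lambda * d[None, :]

# Usage: the rewritten memory maps k_star to v_star, while keys that are
# C-orthogonal to k_star keep their original values.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
keys = rng.standard_normal((100, 16))
k_star, v_star = rng.standard_normal(16), rng.standard_normal(8)
W_new = rank_one_rewrite(W, keys, k_star, v_star)
assert np.allclose(W_new @ k_star, v_star)
```

Because the change to W is rank one and aligned with C^{-1} k*, behavior on keys unrelated (under the key covariance) to k* is largely preserved, which is the sense in which a single "rule" is edited without retraining the whole model.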
