Paper Title


Disentangled Representation Learning and Generation with Manifold Optimization

Authors

Pandey, Arun, Fanuel, Michael, Schreurs, Joachim, Suykens, Johan A. K.

Abstract


Disentanglement is a useful property in representation learning which increases the interpretability of generative models such as Variational Autoencoders (VAEs), Generative Adversarial Models, and their many variants. Typically in such models, an increase in disentanglement performance is traded off against generation quality. In the context of latent space models, this work presents a representation learning framework that explicitly promotes disentanglement by encouraging orthogonal directions of variation. The proposed objective is the sum of an autoencoder error term and a Principal Component Analysis reconstruction error in the feature space. This has an interpretation as a Restricted Kernel Machine with the eigenvector matrix valued on the Stiefel manifold. Our analysis shows that such a construction promotes disentanglement by matching the principal directions in the latent space with the directions of orthogonal variation in data space. In an alternating minimization scheme, we use the Cayley ADAM algorithm, a stochastic optimization method on the Stiefel manifold, along with the ADAM optimizer. Our theoretical discussion and various experiments show that the proposed model improves over many VAE variants in terms of both generation quality and disentangled representation learning.
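The Cayley ADAM optimizer mentioned in the abstract keeps the eigenvector matrix on the Stiefel manifold (matrices with orthonormal columns) throughout training. A minimal sketch of the underlying idea, a plain-gradient Cayley-transform retraction without ADAM's moment estimates, might look like this (`cayley_step` is a hypothetical helper name, not from the paper's code):

```python
import numpy as np

def cayley_step(U, G, lr=0.1):
    """One Cayley-transform update on the Stiefel manifold.

    U: (n, p) matrix with orthonormal columns (U.T @ U = I).
    G: (n, p) Euclidean gradient of the loss at U.
    Returns an updated (n, p) matrix that stays on the manifold.
    """
    n = U.shape[0]
    # Skew-symmetric matrix built from the gradient: W = G U^T - U G^T.
    W = G @ U.T - U @ G.T
    I = np.eye(n)
    # The Cayley transform (I + lr/2 W)^{-1} (I - lr/2 W) is orthogonal
    # whenever W is skew-symmetric, so orthonormality of U is preserved.
    return np.linalg.solve(I + (lr / 2) * W, (I - (lr / 2) * W) @ U)
```

Because the update is an orthogonal transformation of `U`, the columns remain orthonormal after every step, which is exactly the constraint the alternating minimization scheme maintains while the ADAM optimizer updates the remaining (unconstrained) network parameters.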
