Paper Title

Adversarial Mutual Information for Text Generation

Paper Authors

Boyuan Pan, Yazheng Yang, Kaizhao Liang, Bhavya Kailkhura, Zhongming Jin, Xian-Sheng Hua, Deng Cai, Bo Li

Paper Abstract

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation. However, previous works paid little attention to modeling the backward network of MI (i.e., dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle-point (min-max) optimization aiming to identify joint interactions between the source and target. Within this framework, the forward and backward networks are able to iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments based on different text generation tasks demonstrate that the proposed AMI framework can significantly outperform several strong baselines, and we also show that AMI has the potential to lead to a tighter lower bound of maximum mutual information for the variational information maximization problem.
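
For context on the tightness claim above, the variational information maximization bound in question is the classical Barber-Agakov lower bound. A minimal sketch in LaTeX, writing the backward (target-to-source) network as $q_\phi(s \mid t)$ (a label of our choosing, not notation confirmed by the paper):

% Barber-Agakov variational lower bound on mutual information.
% The inequality holds for any variational distribution q_phi(s|t);
% the gap equals E_{p(t)}[ KL( p(s|t) || q_phi(s|t) ) ], which vanishes
% exactly when the backward network matches the true posterior p(s|t).
\begin{align*}
I(S;T) &= H(S) + \mathbb{E}_{p(s,t)}\left[\log p(s \mid t)\right] \\
       &\ge H(S) + \mathbb{E}_{p(s,t)}\left[\log q_\phi(s \mid t)\right]
\end{align*}

Since the slack in this bound is a KL divergence from the true posterior to the backward network, improving $q_\phi$ directly tightens the bound; this is why the abstract argues that training the backward network adversarially against the forward one can yield a tighter lower bound.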
