Paper Title

Self-Supervised GAN Compression

Paper Authors

Chong Yu, Jeff Pool

Paper Abstract

Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different pruning granularities.
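The abstract's starting point is standard magnitude-based weight pruning, which the paper then shows is insufficient for GAN generators on its own. As background, the pruning step itself can be sketched as follows; this is a generic, hypothetical illustration (the function name `magnitude_prune` and the use of NumPy are my own, not from the paper), not the paper's exact procedure:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Generic magnitude pruning: keep the largest-|w| entries, zero the rest.
    The paper's contribution is not this step but how the pruned generator
    is retrained, supervised by the already-trained discriminator.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)        # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold   # keep only weights above threshold
    return weights * mask
```

Under the paper's scheme, a generator pruned this way would then be fine-tuned with the frozen, trained discriminator providing the supervisory signal, rather than being trained from scratch against a fresh discriminator.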
