Paper Title
CNeRV: Content-adaptive Neural Representation for Visual Data
Paper Authors
Paper Abstract
Compression and reconstruction of visual data have been widely studied in the computer vision community, even before the popularization of deep learning. More recently, some have used deep learning to improve or refine existing pipelines, while others have proposed end-to-end approaches, including autoencoders and implicit neural representations, such as SIREN and NeRV. In this work, we propose Neural Visual Representation with Content-adaptive Embedding (CNeRV), which combines the generalizability of autoencoders with the simplicity and compactness of implicit representation. We introduce a novel content-adaptive embedding that is unified, concise, and internally (within-video) generalizable, and that complements a powerful decoder with a single-layer encoder. We match the performance of NeRV, a state-of-the-art implicit neural representation, on the reconstruction task for frames seen during training, while far surpassing it on frames skipped during training (unseen images). To achieve similar reconstruction quality on unseen images, NeRV needs 120x more time to overfit per frame due to its lack of internal generalization. With the same latent code length and similar model size, CNeRV outperforms autoencoders on reconstruction of both seen and unseen images. We also show promising results for visual data compression. More details can be found on the project page: https://haochen-rye.github.io/CNeRV/