Paper Title

HyperTime: Implicit Neural Representation for Time Series

Authors

Elizabeth Fons, Alejandro Sztrajman, Yousef El-laham, Alexandros Iosifidis, Svitlana Vyetrenko

Abstract

Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data. Their robustness as general approximators has been shown in a wide variety of data sources, with applications in image, sound, and 3D scene representation. However, little attention has been given to leveraging these architectures for the representation and analysis of time series data. In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed. We show how these networks can be leveraged for the imputation of time series, with applications to both univariate and multivariate data. Finally, we propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset. We introduce an FFT-based loss to guide training so that all frequencies are preserved in the time series. We show that this network can be used to encode time series as INRs, and that their embeddings can be interpolated to generate new time series from existing ones. We evaluate our generative method by using it for data augmentation, and show that it is competitive with current state-of-the-art approaches for time series augmentation.
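The abstract references two concrete building blocks: a neural network (an INR) that maps a time coordinate to the series value, with the choice of activation function affecting reconstruction quality, and an FFT-based loss that penalizes discrepancies in the frequency domain. The following is a minimal PyTorch sketch of both, not the authors' implementation: the SIREN-style sine activation, the omega_0 frequency factor, the layer sizes, the magnitude-only spectral loss, and the 0.1 loss weighting are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation (SIREN-style)."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN initialization: wider range for the first layer,
        # frequency-scaled range for the rest.
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class TimeSeriesINR(nn.Module):
    """MLP mapping a scalar time coordinate t -> series value(s)."""
    def __init__(self, hidden=64, depth=3, out_channels=1):
        super().__init__()
        layers = [SineLayer(1, hidden, is_first=True)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        self.net = nn.Sequential(*layers, nn.Linear(hidden, out_channels))

    def forward(self, t):
        return self.net(t)

def fft_mse_loss(pred, target):
    """One possible FFT-based loss: MSE between the magnitudes of
    the real FFTs of prediction and target (assumed form)."""
    return torch.mean((torch.fft.rfft(pred, dim=0).abs()
                       - torch.fft.rfft(target, dim=0).abs()) ** 2)

# Fit one univariate series: t in [0, 1], values y of shape (T, 1).
T = 256
t = torch.linspace(0, 1, T).unsqueeze(-1)
y = torch.sin(2 * torch.pi * 4 * t) + 0.3 * torch.sin(2 * torch.pi * 11 * t)

model = TimeSeriesINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(2000):
    opt.zero_grad()
    pred = model(t)
    # Time-domain MSE plus a frequency-domain term, so that both the
    # waveform and its spectrum are matched (weighting is an assumption).
    loss = torch.mean((pred - y) ** 2) + 0.1 * fft_mse_loss(pred, y)
    loss.backward()
    opt.step()

# Resolution-independent decoding: query the fitted INR on a finer grid.
t_fine = torch.linspace(0, 1, 4 * T).unsqueeze(-1)
y_fine = model(t_fine)
```

Because the fitted INR is a continuous function of t, it can be queried at arbitrary time points, which is what enables the resolution-independent encoding and the imputation of missing values described in the abstract.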
