Paper Title
MegaCRN: Meta-Graph Convolutional Recurrent Network for Spatio-Temporal Modeling
Paper Authors
Paper Abstract
Spatio-temporal modeling, as a canonical task of multivariate time series forecasting, has been a significant research topic in the AI community. To address the underlying heterogeneity and non-stationarity implied in the graph streams, in this study, we propose Spatio-Temporal Meta-Graph Learning as a novel Graph Structure Learning mechanism on spatio-temporal data. Specifically, we implement this idea into Meta-Graph Convolutional Recurrent Network (MegaCRN) by plugging the Meta-Graph Learner, powered by a Meta-Node Bank, into a GCRN encoder-decoder. We conduct a comprehensive evaluation on two benchmark datasets (METR-LA and PEMS-BAY) and a large-scale spatio-temporal dataset that contains a variety of non-stationary phenomena. Our model outperforms the state of the art by a large margin on all three datasets (over 27% in MAE and 34% in RMSE). Besides, through a series of qualitative evaluations, we demonstrate that our model can explicitly disentangle locations and time slots with different patterns and can robustly adapt to different anomalous situations. Code and datasets are available at https://github.com/deepkashiwa20/MegaCRN.
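The core mechanism the abstract describes — querying a Meta-Node Bank with encoder hidden states and deriving a time-varying graph from the retrieved embeddings — can be sketched as follows. This is a minimal NumPy illustration under assumptions of our own: the function names, dimensions, and the exact attention/reconstruction steps are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions, chosen only for illustration
num_nodes, hidden_dim = 5, 8   # graph nodes, GCRN hidden size
num_meta, meta_dim = 4, 8      # meta-nodes in the bank, embedding size

rng = np.random.default_rng(0)
memory = rng.normal(size=(num_meta, meta_dim))   # the Meta-Node Bank (learnable)
W_q = rng.normal(size=(hidden_dim, meta_dim))    # query projection (learnable)

def meta_graph(hidden):
    """Query the Meta-Node Bank with encoder hidden states and
    build an input-dependent adjacency from the retrieved embeddings."""
    query = hidden @ W_q                          # (num_nodes, meta_dim)
    att = softmax(query @ memory.T, axis=-1)      # attention over meta-nodes
    node_emb = att @ memory                       # per-node meta embeddings
    # Graph structure learning: similarity of embeddings -> normalized adjacency
    adj = softmax(np.maximum(node_emb @ node_emb.T, 0.0), axis=-1)
    return adj

hidden = rng.normal(size=(num_nodes, hidden_dim))  # e.g., a GCRN encoder state
A = meta_graph(hidden)
assert A.shape == (num_nodes, num_nodes)
assert np.allclose(A.sum(axis=1), 1.0)             # each row is a distribution
```

Because the adjacency is recomputed from the current hidden state at every step, locations and time slots with different patterns retrieve different mixtures of meta-nodes, which is what lets the learned graph adapt to heterogeneity and non-stationarity.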