Paper Title

Prior knowledge distillation based on financial time series

Authors

Fang, Jie; Lin, Jianwu

Abstract

One of the major characteristics of financial time series is that they contain a large amount of non-stationary noise, which is challenging for deep neural networks. People normally use various features to address this problem. However, the performance of these features depends on the choice of hyper-parameters. In this paper, we propose to use neural networks to represent these indicators and train a large network, constructed of smaller networks as feature layers, to fine-tune the prior knowledge represented by the indicators. During backpropagation, prior knowledge is transferred from human logic to machine logic via gradient descent. Prior knowledge serves as the deep belief of the neural network and teaches the network not to be affected by non-stationary noise. Moreover, co-distillation is applied to distill the structure into a much smaller size to reduce redundant features and the risk of overfitting. In addition, the decisions of the smaller network are more robust and cautious than those of the large network with respect to gradient descent. In numerical experiments, we find that our algorithm is faster and more accurate than traditional methods on real financial datasets. We also conduct experiments to verify and understand the method.
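
To make the described pipeline concrete, below is a minimal sketch, assuming PyTorch. The module names (IndicatorNet, CompositeNet, StudentNet), the layer sizes, and the soft-target distillation loss are illustrative assumptions, not the authors' implementation: small networks stand in for hand-crafted indicators and form the feature layer of a larger teacher network, which is then distilled into a much smaller student.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code):
# small "indicator networks" form the feature layer of a larger network,
# whose knowledge is then distilled into a much smaller student network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IndicatorNet(nn.Module):
    """Small MLP intended to mimic one hand-crafted indicator (e.g. a moving average)."""
    def __init__(self, window: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):  # x: (batch, window) of past returns
        return self.net(x)


class CompositeNet(nn.Module):
    """Large teacher network whose first layer is a bank of indicator networks."""
    def __init__(self, window: int, n_indicators: int = 8, n_classes: int = 2):
        super().__init__()
        self.indicators = nn.ModuleList([IndicatorNet(window) for _ in range(n_indicators)])
        self.head = nn.Sequential(nn.Linear(n_indicators, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, x):
        # Indicator outputs are concatenated into a feature vector; backpropagation
        # fine-tunes the prior knowledge encoded in the indicator sub-networks.
        feats = torch.cat([ind(x) for ind in self.indicators], dim=1)
        return self.head(feats)


class StudentNet(nn.Module):
    """Much smaller network that the teacher is distilled into."""
    def __init__(self, window: int, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 8), nn.ReLU(), nn.Linear(8, n_classes))

    def forward(self, x):
        return self.net(x)


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target distillation: KL divergence to the teacher plus cross-entropy to labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


if __name__ == "__main__":
    window, batch = 30, 64
    x = torch.randn(batch, window)       # toy stand-in for price/return windows
    y = torch.randint(0, 2, (batch,))    # toy up/down labels
    teacher, student = CompositeNet(window), StudentNet(window)
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, y)
    loss.backward()
    print(float(loss))
```

In practice, each IndicatorNet could first be pre-trained to regress the output of the corresponding technical indicator, which is one plausible way the prior knowledge would enter the larger network; the co-distillation the abstract mentions would train teacher and student jointly rather than sequentially as sketched here.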
