Paper Title
Deep Latent-Variable Kernel Learning
Paper Authors
Paper Abstract
Deep kernel learning (DKL) leverages the connection between Gaussian process (GP) and neural networks (NN) to build an end-to-end, hybrid model. It combines the capability of NN to learn rich representations from massive data with the non-parametric property of GP to achieve automatic regularization that incorporates a trade-off between model fit and model complexity. However, the deterministic encoder may weaken the model regularization of the subsequent GP part, especially on small datasets, due to the free latent representation. We therefore present a complete deep latent-variable kernel learning (DLVKL) model wherein the latent variables perform stochastic encoding for regularized representation. We further enhance DLVKL from two aspects: (i) an expressive variational posterior built through a neural stochastic differential equation (NSDE) to improve the approximation quality, and (ii) a hybrid prior that takes knowledge from both the SDE prior and the posterior to arrive at a flexible trade-off. Extensive experiments show that DLVKL-NSDE performs similarly to the well-calibrated GP on small datasets, and outperforms existing deep GPs on large datasets.
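To make the high-level description above concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the ingredients the abstract names: a stochastic encoder that maps inputs to latent variables via the reparameterization trick, an Euler-Maruyama discretized neural SDE standing in for the expressive NSDE variational posterior, and a GP kernel evaluated on the resulting latents. All class and function names (StochasticEncoder, SDEFlow, rbf_kernel) and the hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class StochasticEncoder(nn.Module):
    """NN encoder producing the mean and log-variance of the latent variables z."""

    def __init__(self, in_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )

    def forward(self, x):
        mu, log_var = self.net(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
        # so the encoding is stochastic rather than deterministic.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return z, mu, log_var


class SDEFlow(nn.Module):
    """Euler-Maruyama discretization of a neural SDE that transports the latents
    (a stand-in for the expressive NSDE variational posterior)."""

    def __init__(self, latent_dim: int, hidden: int = 64,
                 n_steps: int = 5, dt: float = 0.2):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, latent_dim))
        self.diffusion = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, latent_dim), nn.Softplus())
        self.n_steps, self.dt = n_steps, dt

    def forward(self, z):
        for _ in range(self.n_steps):
            dw = torch.randn_like(z) * self.dt ** 0.5   # Brownian increment
            z = z + self.drift(z) * self.dt + self.diffusion(z) * dw
        return z


def rbf_kernel(z1, z2, lengthscale: float = 1.0):
    """Stationary RBF kernel evaluated on the latent representations (the 'deep kernel')."""
    sq_dist = torch.cdist(z1, z2).pow(2)
    return torch.exp(-0.5 * sq_dist / lengthscale ** 2)


if __name__ == "__main__":
    x = torch.randn(8, 5)                        # 8 toy inputs with 5 features
    encoder = StochasticEncoder(in_dim=5, latent_dim=2)
    flow = SDEFlow(latent_dim=2)
    z0, mu, log_var = encoder(x)                 # stochastic encoding of the inputs
    z = flow(z0)                                 # NSDE-style refinement of the latents
    K = rbf_kernel(z, z)                         # GP covariance built on the latent space
    print(K.shape)                               # torch.Size([8, 8])
```

In the actual DLVKL(-NSDE) model these components are trained jointly by maximizing a variational lower bound that regularizes the stochastic latent encoding toward a prior; that objective and the sparse GP machinery are omitted from this sketch.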