Paper Title
Seq2Tens: An Efficient Representation of Sequences by Low-Rank Tensor Projections
Paper Authors
Paper Abstract
Sequential data such as time series, video, or text can be challenging to analyse as the ordered structure gives rise to complex dependencies. At the heart of this is non-commutativity, in the sense that reordering the elements of a sequence can completely change its meaning. We use a classical mathematical object -- the tensor algebra -- to capture such dependencies. To address the innate computational complexity of high degree tensors, we use compositions of low-rank tensor projections. This yields modular and scalable building blocks for neural networks that give state-of-the-art performance on standard benchmarks such as multivariate time series classification and generative models for video.
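To make the idea of order-sensitive features from low-rank tensor projections concrete, below is a minimal sketch (not the authors' implementation); the module name `LowRankSeqProjection` and the parameters `n_features` and `degree` are hypothetical. For each output feature it computes a rank-1 projection of the degree-m tensor features of a sequence, i.e. sums of products of inner products over ordered index tuples, evaluated with cumulative sums so that reordering the sequence changes the result.

```python
# Hypothetical sketch (not the authors' code): a degree-m low-rank projection
# of truncated tensor-algebra features of a sequence, computed via cumulative
# sums so that element order matters.
import torch
import torch.nn as nn


class LowRankSeqProjection(nn.Module):
    """Maps sequences of shape (batch, length, dim) to (batch, n_features).

    For each output feature and degree m, it computes
        sum_{i_1 < ... < i_m} <z_1, x_{i_1}> * ... * <z_m, x_{i_m}>,
    a rank-1 projection of the degree-m tensor features of the sequence.
    """

    def __init__(self, dim: int, n_features: int, degree: int = 2):
        super().__init__()
        self.degree = degree
        # One linear functional z_j per degree level, for each output feature.
        self.proj = nn.ModuleList(
            [nn.Linear(dim, n_features, bias=False) for _ in range(degree)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # u[j] has shape (batch, length, n_features): inner products <z_j, x_t>.
        u = [p(x) for p in self.proj]
        acc = u[0]
        for j in range(1, self.degree):
            # Exclusive cumulative sum enforces the strict ordering i_{j-1} < i_j.
            prev = torch.cumsum(acc, dim=1)
            prev = torch.cat([torch.zeros_like(prev[:, :1]), prev[:, :-1]], dim=1)
            acc = u[j] * prev
        return acc.sum(dim=1)


# Usage: unlike a plain sum over time steps, reversing the sequence
# generally changes the output, reflecting non-commutativity.
if __name__ == "__main__":
    layer = LowRankSeqProjection(dim=3, n_features=8, degree=2)
    x = torch.randn(4, 10, 3)
    print(layer(x).shape)                               # torch.Size([4, 8])
    print(torch.allclose(layer(x), layer(x.flip(1))))   # typically False
```

Such a layer is differentiable and cheap (linear in sequence length per feature), so it can be stacked or composed with other network layers as a building block, in the modular spirit described in the abstract.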