Paper Title

Model-Free Prediction of Partially Observable Spatiotemporal Chaotic Systems

VLTinT: Visual-Linguistic Transformer-in-Transformer for Coherent Video Paragraph Captioning

Paper Authors

Kashu Yamazaki, Khoa Vo, Sang Truong, Bhiksha Raj, Ngan Le

Paper Abstract

Reservoir computing is a powerful tool for predicting turbulence: its simple architecture makes it computationally efficient for handling large systems. However, its implementation typically requires full state-vector measurements and knowledge of the system nonlinearities. We use a nonlinear projection function to expand the system measurements into a high-dimensional space, which is then fed into the reservoir to obtain predictions. We demonstrate this reservoir-computing network on a spatiotemporally chaotic system that emulates several features of turbulence. We show that, with radial basis functions as the nonlinear projector, the complex system nonlinearities can be captured robustly even with only partial observations and without knowledge of the governing equations. Finally, we show that the network still produces fairly accurate predictions when the measurements are sparse, incomplete, and noisy, and even when the governing equations become inaccurate, paving the way for model-free prediction of practical turbulent systems.
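
The pipeline described in this abstract (partial measurements, a radial-basis-function lift into a higher-dimensional space, a reservoir, and a linear readout) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the Gaussian RBF form, the echo-state-style reservoir update, the ridge-regression readout, and all names and sizes (`rbf_project`, `make_reservoir`, `n_res`, and so on) are placeholders, not the authors' implementation.

```python
# Minimal NumPy sketch: RBF-projected partial measurements drive a reservoir,
# and a ridge-regression readout maps reservoir states to next-step targets.
import numpy as np

rng = np.random.default_rng(0)

def rbf_project(u, centers, gamma=1.0):
    """Lift a measurement vector u with Gaussian radial basis functions."""
    d2 = np.sum((centers - u) ** 2, axis=1)
    return np.exp(-gamma * d2)

def make_reservoir(n_in, n_res, spectral_radius=0.9, input_scale=0.5):
    """Random recurrent matrix (rescaled spectral radius) and input weights."""
    A = rng.standard_normal((n_res, n_res))
    A *= spectral_radius / np.max(np.abs(np.linalg.eigvals(A)))
    W_in = input_scale * rng.uniform(-1, 1, (n_res, n_in))
    return A, W_in

def run_reservoir(U, centers, A, W_in):
    """Collect reservoir states driven by the RBF-projected measurements U[t]."""
    r = np.zeros(A.shape[0])
    states = []
    for u in U:
        phi = rbf_project(u, centers)
        r = np.tanh(A @ r + W_in @ phi)
        states.append(r.copy())
    return np.array(states)

# Toy usage: one-step-ahead prediction of the partial measurements themselves.
T, n_obs, n_centers, n_res = 500, 8, 64, 300
U = rng.standard_normal((T, n_obs))        # stand-in for sparse, noisy measurements
Y = np.roll(U, -1, axis=0)                 # next-step targets
centers = rng.standard_normal((n_centers, n_obs))
A, W_in = make_reservoir(n_centers, n_res)
R = run_reservoir(U[:-1], centers, A, W_in)
ridge = 1e-6
W_out = Y[:-1].T @ R @ np.linalg.inv(R.T @ R + ridge * np.eye(n_res))
prediction = W_out @ R[-1]                 # estimate of the next measurement
```

In this toy setup only the readout `W_out` is trained, which is what keeps reservoir computing cheap for large systems; the RBF lift is the only place where the (unknown) system nonlinearity has to be absorbed.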

Video paragraph captioning aims to generate a multi-sentence description of an untrimmed video with several temporal event locations in coherent storytelling. Following the human perception process, where the scene is effectively understood by decomposing it into visual (e.g. human, animal) and non-visual components (e.g. action, relations) under the mutual influence of vision and language, we first propose a visual-linguistic (VL) feature. In the proposed VL feature, the scene is modeled by three modalities: (i) a global visual environment; (ii) local visual main agents; (iii) linguistic scene elements. We then introduce an autoregressive Transformer-in-Transformer (TinT) to simultaneously capture the semantic coherence of intra- and inter-event contents within a video. Finally, we present a new VL contrastive loss function to guarantee that the learnt embedding features match the caption semantics. Comprehensive experiments and extensive ablation studies on the ActivityNet Captions and YouCookII datasets show that the proposed Visual-Linguistic Transformer-in-Transformer (VLTinT) outperforms prior state-of-the-art methods in accuracy and diversity. Source code is made publicly available at: https://github.com/UARK-AICV/VLTinT.
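
As a rough illustration of the VL contrastive idea (pulling learnt event embeddings toward their paired caption embeddings), here is a minimal sketch of a symmetric InfoNCE-style objective. The specific loss form, the temperature value, and the function name `vl_contrastive_loss` are assumptions made for this sketch, not the authors' formulation; the actual loss is defined in the released code at https://github.com/UARK-AICV/VLTinT.

```python
# Minimal PyTorch sketch of a visual-linguistic contrastive objective:
# matched event/caption pairs are treated as positives, all other pairs in
# the batch as negatives, with a symmetric cross-entropy over similarities.
import torch
import torch.nn.functional as F

def vl_contrastive_loss(event_emb, caption_emb, temperature=0.07):
    """event_emb, caption_emb: (batch, dim) paired embeddings."""
    v = F.normalize(event_emb, dim=-1)
    t = F.normalize(caption_emb, dim=-1)
    logits = v @ t.T / temperature                 # (batch, batch) similarities
    targets = torch.arange(v.size(0), device=v.device)
    # Symmetric loss: match events to captions and captions to events.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage with random tensors standing in for encoder outputs.
ev = torch.randn(4, 512)
cap = torch.randn(4, 512)
loss = vl_contrastive_loss(ev, cap)
```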
