Paper Title

A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation

Authors

Huynh Van Luong, Boris Joukovsky, Yonina C. Eldar, Nikos Deligiannis

Abstract

Deep unfolded neural networks are designed by unrolling the iterations of optimization algorithms. They can be shown to achieve faster convergence and higher accuracy than their optimization counterparts. This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA) with application to video foreground-background separation. Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames. To this end, we perform the unfolding of an iterative algorithm for solving reweighted $\ell_1$-$\ell_1$ minimization; this unfolding leads to a different proximal operator (a.k.a. different activation function) adaptively learned per neuron. Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
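To make the "different proximal operator adaptively learned per neuron" concrete, below is a minimal NumPy sketch (not the authors' implementation) of the closed-form scalar proximal operator of the ℓ1-ℓ1 penalty f(x) = α|x| + β|x − s|, where s plays the role of the side information (e.g., the sparse code of the previous frame). The names `alpha`, `beta`, and `s` are illustrative; in an unfolded reweighted ℓ1-ℓ1 network these weights would be learned per neuron, which is what turns the proximal step into a per-neuron adaptive activation function.

```python
import numpy as np

def prox_l1_l1(v, alpha, beta, s):
    """Closed-form proximal operator of f(x) = alpha*|x| + beta*|x - s|.

    Returns argmin_x 0.5*(x - v)**2 + f(x) for scalar v, assuming
    alpha, beta >= 0. The result is a piecewise-linear function of v
    with kinks at 0 and at the side-information value s.
    """
    if s < 0:
        # f is symmetric under (x, s) -> (-x, -s), so reuse the s >= 0 case.
        return -prox_l1_l1(-v, alpha, beta, -s)
    if v < -alpha - beta:
        return v + alpha + beta        # optimum lies in x < 0
    if v <= alpha - beta:
        return 0.0                     # stuck at the kink x = 0
    if v < s + alpha - beta:
        return v - alpha + beta        # optimum lies in 0 < x < s
    if v <= s + alpha + beta:
        return s                       # stuck at the kink x = s
    return v - alpha - beta            # optimum lies in x > s

# Elementwise version for vector-valued sparse codes.
prox_vec = np.vectorize(prox_l1_l1)
```

Note that with s = 0 the operator reduces to plain soft-thresholding with threshold α + β, the activation used in standard unfolded ISTA/LISTA networks; the extra kink at s is what lets the network exploit the temporal correlation with the previous frame's sparse representation.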
