Paper Title
Self-Supervised Visual Learning by Variable Playback Speeds Prediction of a Video
Authors
Abstract
We propose a self-supervised visual learning method that predicts the variable playback speeds of a video. Without semantic labels, we learn the spatio-temporal visual representation of a video by leveraging the variations in visual appearance at different playback speeds, under the assumption of temporal coherence. To learn the spatio-temporal visual variations across the entire video, we not only predict a single playback speed but also generate clips with various playback speeds and directions from randomized starting points. Hence, the visual representation can be learned solely from the meta-information (playback speeds and directions) of the video. We also propose a layer-dependable temporal group normalization method that can be applied to 3D convolutional networks to improve representation learning: the temporal features are divided into several groups, and each group is normalized with its own corresponding parameters. We validate the effectiveness of our method by fine-tuning it for the action recognition and video retrieval tasks on UCF-101 and HMDB-51.
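The pretext task sketched in the abstract, generating clips with random playback speeds, directions, and starting points, and using the (speed, direction) pair as the self-supervision label, might look like the following. This is a minimal sketch, not the authors' implementation; the clip length of 8 and the speed set {1, 2, 4} are assumptions, since the abstract does not give exact values.

```python
import numpy as np

# Assumed speed set; the paper's exact speeds are not stated in the abstract.
SPEEDS = [1, 2, 4]

def sample_clip(video, clip_len=8, rng=None):
    """Draw a training clip and its pseudo-label from a video.

    video: array of shape (T, H, W, C) holding T frames.
    Returns (clip, label) where clip has clip_len frames and label
    encodes the sampled (speed, direction) pair.
    """
    if rng is None:
        rng = np.random.default_rng()
    speed = SPEEDS[rng.integers(len(SPEEDS))]
    forward = bool(rng.integers(2))          # random playback direction
    span = clip_len * speed                  # source frames consumed at this speed
    start = rng.integers(0, video.shape[0] - span + 1)  # random starting point
    idx = np.arange(start, start + span, speed)          # subsample every `speed` frames
    if not forward:
        idx = idx[::-1]                      # reversed playback
    # One class per (speed, direction) combination, e.g. 3 speeds x 2 directions = 6.
    label = SPEEDS.index(speed) * 2 + int(forward)
    return video[idx], label
```

A 3D CNN would then be trained to classify `label` from `clip`, so the network must model spatio-temporal appearance changes without any semantic annotation.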
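The temporal group normalization idea, splitting the temporal axis into groups and normalizing each group with its own parameters, can be illustrated as below. This is a numpy sketch under assumed shapes (a single sample of shape (C, T, H, W) with per-group affine parameters), not the paper's layer-dependable implementation.

```python
import numpy as np

def temporal_group_norm(x, num_groups, gamma, beta, eps=1e-5):
    """Normalize temporal feature groups independently.

    x: features of shape (C, T, H, W).
    gamma, beta: per-group scale/shift, each of shape (num_groups,).
    Each temporal group is standardized with its own statistics, then
    scaled and shifted by its own parameters.
    """
    C, T, H, W = x.shape
    assert T % num_groups == 0, "T must divide evenly into groups"
    g = x.reshape(C, num_groups, T // num_groups, H, W)
    # Statistics are computed per temporal group, across channels and space.
    mean = g.mean(axis=(0, 2, 3, 4), keepdims=True)
    var = g.var(axis=(0, 2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    # Different corresponding parameters for each group.
    g = g * gamma.reshape(1, -1, 1, 1, 1) + beta.reshape(1, -1, 1, 1, 1)
    return g.reshape(C, T, H, W)
```

Compared with standard group normalization, which groups channels, the grouping here is along time, so early and late parts of a clip are normalized separately, which is what lets the layer preserve temporal variation.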