Scene Consistency Representation Learning for Video Scene Segmentation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Haoqian Wu1,2,3,4 Keyu Chen2 Yanan Luo2 Ruizhi Qiao2 Bo Ren2 Haozhe Liu1,3,4,5 Weicheng Xie1,3,4 Linlin Shen1,3,4
1Shenzhen University 2Tencent YouTu Lab 3Shenzhen Institute of Artificial Intelligence and Robotics for Society
4Guangdong Key Laboratory of Intelligent Information Processing 5KAUST
Figure 1: An illustration of representation learning methods from the shot-to-scene perspective. Several consecutive shots are shown in Fig. (a), where existing SSL approaches obtain positive pairs from adjacent shots (e.g., by Nearest Neighbor (NN) selection). In contrast, we propose to look further, since scenes often cross over one another, as Scenes A/C and B/E show in Fig. (b): positive samples are explored in a broader region, so that shots of the same scene are clustered together in the feature representation space, i.e., Fig. (c). Best viewed in color.
Abstract
A long-term video, such as a movie or TV show, is composed of various scenes, each of which represents a series of shots sharing the same semantic story. Spotting the correct scene boundaries in a long-term video is a challenging task, since a model must understand the storyline of the video to figure out where a scene starts and ends. To this end, we propose an effective Self-Supervised Learning (SSL) framework to learn better shot representations from unlabeled long-term videos. More specifically, we present an SSL scheme that achieves scene consistency, while exploring effective data augmentation and shuffling methods to boost model generalizability. Instead of explicitly learning scene boundary features as in previous methods, we introduce a vanilla temporal model with less inductive bias to verify the quality of the shot features. Our method achieves state-of-the-art performance on the task of Video Scene Segmentation. Additionally, we suggest a fairer and more reasonable benchmark for evaluating Video Scene Segmentation methods. The code is made available.
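For concreteness, below is a minimal sketch of a scene-consistency contrastive objective: for each shot, a positive sample is selected from a set of nearby candidate shots, and an InfoNCE-style loss pulls it toward the anchor. This is an illustrative reconstruction, not the released implementation; the tensor layout and the temperature tau are assumptions.

```python
# Illustrative sketch (not the authors' code) of an InfoNCE-style
# scene-consistency loss over shot embeddings.
import torch
import torch.nn.functional as F

def scene_consistency_loss(anchor, candidates, pos_idx, tau=0.1):
    """Contrastive loss over L2-normalized shot embeddings.

    anchor:     (B, D) embeddings of the query shots
    candidates: (B, N, D) embeddings of N candidate shots per query
    pos_idx:    (B,) long tensor, index of the chosen positive among N
    """
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(candidates, dim=-1)
    # Cosine similarity between each anchor and its N candidates: (B, N)
    logits = torch.einsum("bd,bnd->bn", anchor, candidates) / tau
    # Cross-entropy against the positive index = InfoNCE
    return F.cross_entropy(logits, pos_idx)
```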
Figure 2: The pipeline of the proposed method. (a) Unsupervised Representation Learning Stage for learning shot representations, where Map(i) is the mapping function for selecting positive samples. (b) Supervised Video Scene Segmentation Stage, where the quality of the shot representations is evaluated under the protocols of the non-temporal (MLP) and temporal (Bi-LSTM) models.
Figure 3: The illustration of four different selection strategies for positive pairs. Best viewed in color.
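As a hedged sketch of how such mapping functions Map(i) might look, two illustrative strategies are given below: Nearest Neighbor selection of an adjacent shot, and selection from a broader window around the query. The window size K is an assumed parameter, and the paper's exact four strategies may differ.

```python
# Illustrative positive-pair mapping functions Map(i); assumes at
# least two shots in the sequence. K is an assumed hyperparameter.
import random

def map_nn(i, num_shots):
    """Nearest Neighbor: the positive is an immediately adjacent shot."""
    return i + 1 if i + 1 < num_shots else i - 1

def map_window(i, num_shots, K=8):
    """Broader region: sample a positive uniformly from a +/-K window."""
    lo, hi = max(0, i - K), min(num_shots - 1, i + K)
    j = random.randint(lo, hi)
    return j if j != i else map_nn(i, num_shots)
```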
Figure 4: The illustration of Scene Agnostic Clip-Shuffling. Clips are spliced in random order for training, and each clip contains ρ continuous shots.
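A minimal sketch of Scene Agnostic Clip-Shuffling under these assumptions: the shot sequence is cut into clips of ρ continuous shots each, and the clips (rather than individual shots) are spliced in random order. The function below is illustrative, not the released code.

```python
# Illustrative Scene Agnostic Clip-Shuffling: shuffle clips of
# rho continuous shots, preserving order within each clip.
import random

def clip_shuffle(shots, rho=4):
    """Split `shots` into clips of length rho and shuffle the clips."""
    clips = [shots[i:i + rho] for i in range(0, len(shots), rho)]
    random.shuffle(clips)
    # Flatten back into one (disordered) shot sequence for training
    return [s for clip in clips for s in clip]
```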
Figure 5: The illustration of the boundary-based model (a) and the boundary-free model (b) for Video Scene Segmentation.
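To make the temporal evaluation protocol concrete, the sketch below shows a vanilla Bi-LSTM head, as in Fig. 2(b), that reads a sequence of frozen shot features and predicts per shot whether a scene boundary follows it. The feature dimension, hidden size, and two-way classification head are assumptions, not the paper's exact configuration.

```python
# Illustrative Bi-LSTM protocol head over frozen shot features;
# hyperparameters are assumed, not taken from the paper.
import torch
import torch.nn as nn

class BiLSTMBoundaryHead(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.cls = nn.Linear(2 * hidden, 2)  # boundary vs. non-boundary

    def forward(self, shot_feats):      # (B, T, feat_dim)
        h, _ = self.lstm(shot_feats)    # (B, T, 2*hidden)
        return self.cls(h)              # (B, T, 2) per-shot logits
```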
Figure 6: Loss evolution curves and AP results of training with different selection strategies.
Figure 7: Visualization results of shot retrieval. Overall, NN tends to select adjacent shots, Self shows less relevance to the query, and ImageNet retrieves many kinds of boats. Compared with the other methods, the results of SC are more semantically consistent, i.e., there is a man in the boat in each retrieved shot, and SC covers a larger span of shot IDs (from 641 to 850) than NN. Meanwhile, SC is more robust to the interference of the pink smoke in the 994-th shot, as its results are purer.
Acknowledgments
The work was supported by the National Natural Science Foundation of China under grants no. 61602315 and 91959108, the Science and Technology Project of Guangdong Province under grant no. 2020A1515010707, and the Science and Technology Innovation Commission of Shenzhen under grant no. JCYJ20190808165203670.
Bibtex
@inproceedings{wu2022scene,
title={Scene Consistency Representation Learning for Video Scene Segmentation},
author={Wu, Haoqian and Chen, Keyu and Luo, Yanan and Qiao, Ruizhi and Ren, Bo and Liu, Haozhe and Xie, Weicheng and Shen, Linlin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2022}
}