Long Short-Term Dynamic Prototype Alignment Learning for Video Anomaly Detection
Chao Huang, Jie Wen, Chengliang Liu, Yabo Liu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 866-874.
https://doi.org/10.24963/ijcai.2024/96
Video anomaly detection (VAD) is a core problem in intelligent video surveillance. Previous methods commonly adopt the unsupervised paradigm of frame reconstruction or prediction. However, insufficient mining of temporal dependencies and of the diverse event patterns within videos limits the performance of existing methods. To tackle these problems, we propose a novel prototype-guided, dynamics-aware long-distance frame prediction paradigm for VAD. Specifically, we develop a prototype-guided dynamics matching network (PDM-Net) to enhance the discriminability and robustness of the anomaly detector. To exploit temporal context, we equip PDM-Net with a long short-term dynamic prototype alignment learning mechanism, which stores long-term dynamic prototypes in a memory bank and learns to recall them from short-term dynamics. As a result, short input sequences can recall long-term dynamic prototypes stored in the memory bank to accomplish long-distance frame prediction. In addition, a feature discrimination module extracts representative dynamic features of various normal events while preserving the diversity of normal patterns. Experimental results on four public datasets demonstrate the superiority of our method.
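The memory-bank recall described in the abstract can be viewed as soft attention: a short-term dynamic feature addresses the bank of long-term prototypes by similarity and reads out their weighted combination. The following minimal NumPy sketch illustrates this idea only; the function name, cosine addressing, and temperature parameter are illustrative assumptions, not the paper's actual PDM-Net implementation.

```python
import numpy as np

def recall_prototypes(query, memory, temperature=0.1):
    """Recall long-term prototypes with a short-term query (illustrative).

    query:  (d,)   short-term dynamic feature
    memory: (m, d) bank of m long-term dynamic prototypes
    Returns an attention-weighted combination of the prototypes.
    """
    # Cosine similarity between the query and each memory item.
    q = query / np.linalg.norm(query)
    mem = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    sims = mem @ q                       # (m,) addressing scores
    # Softmax over similarities; a low temperature sharpens addressing.
    w = np.exp(sims / temperature)
    w /= w.sum()
    # Recalled prototype: convex combination of the memory items.
    return w @ memory

rng = np.random.default_rng(0)
bank = rng.normal(size=(10, 16))         # 10 prototypes, 16-dim features
short_term = rng.normal(size=16)         # query from a short input sequence
recalled = recall_prototypes(short_term, bank)
```

In a prediction-based detector, the recalled prototype would be fused with the encoder features before decoding the future frame, so anomalous inputs, which match the stored normal prototypes poorly, yield large prediction errors.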
Keywords:
Computer Vision: CV: Video analysis and understanding
Computer Vision: CV: Multimodal learning
Machine Learning: ML: Unsupervised learning