Learning Task-Specific Representation for Video Anomaly Detection with Spatial-Temporal Attention

Citation Author(s):
Yang Liu, Jing Liu, Xiaoguang Zhu, Xiaohong Huang, Liang Song
Submitted by:
Yang Liu
Last updated:
5 May 2022 - 8:39am
Document Type:
Presentation Slides
Document Year:
2022
Event:
Presenters:
Yang Liu
Paper Code:
IVMSP-20.5
 

Weakly supervised detection of abnormal events in surveillance videos is commonly formulated as a multiple instance learning task, which aims to temporally localize the clips containing abnormal events using only video-level labels. However, most existing methods rely on features extracted by pre-trained action recognition models, which are not discriminative enough for video anomaly detection. In this work, we propose a spatial-temporal attention mechanism to learn the inter- and intra-clip correlations of video features, and the enhanced features are encouraged to be task-specific via a mutual cosine embedding loss. Experimental results on standard benchmarks demonstrate the effectiveness of the spatial-temporal attention, and our method achieves performance superior to state-of-the-art methods.
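
To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) a self-attention block operating over pre-extracted clip features and (b) a cosine-similarity loss that pushes abnormal and normal video representations apart. This is not the authors' released implementation: the module structure, the margin formulation, and all names (ClipSelfAttention, mutual_cosine_embedding_loss, the feature dimensions) are illustrative assumptions about the spirit of the method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipSelfAttention(nn.Module):
    """Single-head self-attention over a sequence of clip features.

    Input: (batch, num_clips, feat_dim) features from a pre-trained
    action-recognition backbone (e.g. I3D). Attention lets each clip
    aggregate context from the other clips of the same video, which is
    one plausible reading of "inter- and intra-clip correlations".
    """
    def __init__(self, feat_dim: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)
        self.value = nn.Linear(feat_dim, feat_dim)
        self.scale = feat_dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.query(x), self.key(x), self.value(x)
        # (batch, num_clips, num_clips) attention map over clips
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return x + attn @ v  # residual keeps the original backbone signal

def mutual_cosine_embedding_loss(feat_abnormal: torch.Tensor,
                                 feat_normal: torch.Tensor,
                                 margin: float = 0.0) -> torch.Tensor:
    """Penalize high cosine similarity between mean-pooled abnormal and
    normal video representations, a guessed interpretation of the paper's
    mutual cosine embedding loss."""
    sim = F.cosine_similarity(feat_abnormal.mean(dim=1),
                              feat_normal.mean(dim=1), dim=-1)
    return F.relu(sim - margin).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    attn = ClipSelfAttention(feat_dim=512)
    abnormal = attn(torch.randn(2, 32, 512))  # 2 videos x 32 clips x 512-d
    normal = attn(torch.randn(2, 32, 512))
    print(mutual_cosine_embedding_loss(abnormal, normal))

In a weakly supervised MIL pipeline, a loss like this would be combined with a video-level ranking or classification objective; on its own it only separates the bag-level embeddings.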
