Most current techniques detect abnormal content in videos either by clustering trajectories or by capturing intrinsic scene features. Following the latter paradigm, we model the usual, dominant behavior of a scene with unsupervised probabilistic topic models and identify "anomalous" content as its complement. This paper makes the following contributions: (a) we design a visual vocabulary that augments quantized spatio-temporal descriptors with location and size information, which is particularly relevant for static-camera scenes; video clips over this vocabulary are then represented in a latent topic space using models such as pLSA; (b) we propose an algorithm that quantifies the anomalous content of a video clip by projecting the model learned during training onto the clip; (c) based on this algorithm, we classify each clip as normal or abnormal and, for abnormal clips, further localize the anomaly in the spatio-temporal domain. We demonstrate the performance of our approach through experimental evaluation on a surveillance video dataset.
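The pipeline described above can be sketched as follows. This is a minimal, illustrative implementation, not the paper's actual method: it assumes each clip is a bag-of-words count vector over the visual vocabulary, fits pLSA with plain EM, projects a new clip onto the learned topics by "fold-in" (fixing p(w|z) and fitting only p(z|d)), and scores anomaly as negative per-word log-likelihood. All function names and the 1e-12 smoothing constant are choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def plsa_fit(X, n_topics, n_iter=50):
    """Fit pLSA by EM. X: (n_clips, n_words) count matrix over the visual vocabulary."""
    n_docs, n_words = X.shape
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: p(z|d,w) proportional to p(z|d) * p(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]          # (d, z, w)
        post = joint / joint.sum(1, keepdims=True).clip(1e-12)
        # M-step: re-estimate from expected counts n(d,w) * p(z|d,w)
        nz = X[:, None, :] * post                               # (d, z, w)
        p_w_z = nz.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True).clip(1e-12)
        p_z_d = nz.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True).clip(1e-12)
    return p_w_z, p_z_d

def fold_in(x, p_w_z, n_iter=50):
    """Project a new clip x onto the learned topics: fit p(z|d_new) with p(w|z) fixed."""
    n_topics = p_w_z.shape[0]
    p_z = np.full(n_topics, 1.0 / n_topics)
    for _ in range(n_iter):
        joint = p_z[:, None] * p_w_z                            # (z, w)
        post = joint / joint.sum(0, keepdims=True).clip(1e-12)  # p(z|w)
        p_z = (x[None, :] * post).sum(1) + 1e-12                # smoothed re-estimate
        p_z /= p_z.sum()
    return p_z

def anomaly_score(x, p_w_z):
    """Negative per-word log-likelihood of clip x under the trained model.
    High values mean the clip is poorly explained by the dominant behavior."""
    p_z = fold_in(x, p_w_z)
    p_w = p_z @ p_w_z                                           # reconstructed p(w|d_new)
    mask = x > 0
    return -(x[mask] * np.log(p_w[mask].clip(1e-12))).sum() / x.sum()
```

A clip would then be flagged as abnormal when its score exceeds a threshold calibrated on held-out normal clips; localization (contribution (c)) would follow by attributing the score back to the low-likelihood visual words, whose vocabulary entries carry location and size.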