
Multimodal Stereoscopic Movie Summarization Conforming to Narrative Characteristics

Citation Author(s):
Anastasios Tefas, Nikos Nikolaidis, Ioannis Pitas
Submitted by:
Ioannis Mademlis
Last updated:
13 September 2017 - 11:12am
Document Type:
Poster
Document Year:
2017
Presenters:
Ioannis Mademlis
Paper Code:
4656

Video summarization is a timely and rapidly developing research field with broad commercial interest, owing to the increasing availability of massive video data. Relevant algorithms face the challenge of striking a careful balance between summary compactness, enjoyability and content coverage. The specific case of stereoscopic 3D theatrical films has grown in importance over the past years, but has not received corresponding research attention. In the present work, a multi-stage, multimodal summarization process for such stereoscopic movies is proposed, able to extract from a 3D film a short, representative video skim that conforms to narrative characteristics.

At the initial stage, a novel, low-level video frame description method is introduced (Frame Moments Descriptor, or FMoD) that compactly captures informative image statistics from luminance, color, optical flow and stereoscopic disparity video data, at both a global and a local scale. Thus, scene texture, illumination, motion and geometry properties are succinctly contained within a single frame feature descriptor, which can subsequently be employed as a building block in any key-frame extraction scheme, e.g., for intra-shot frame clustering. The computed key-frames are then used to construct a movie summary in the form of a video skim, which is post-processed in a manner that also takes the audio modality into account.

The next stage of the proposed summarization pipeline essentially performs shot pruning, controlled by a user-provided shot retention parameter, which removes segments from the skim based on the narrative prominence of movie characters in both the visual and the audio modalities. This novel process (Multimodal Shot Pruning, or MSP) is algebraically modelled as a multimodal matrix Column Subset Selection Problem, which is solved using an evolutionary computing approach.
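The abstract does not detail which statistics FMoD aggregates, so the sketch below is only a plausible illustration of the general idea (function names are hypothetical; NumPy is assumed): per-channel statistical moments are computed once over the whole frame (global scale) and once per grid cell (local scale), then concatenated into a single descriptor.

```python
import numpy as np

def channel_moments(channel):
    """First four moments of a 2-D channel: mean, std, skewness, kurtosis."""
    x = channel.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    if sigma < 1e-12:  # flat channel: higher moments undefined
        return np.array([mu, 0.0, 0.0, 0.0])
    z = (x - mu) / sigma
    return np.array([mu, sigma, (z ** 3).mean(), (z ** 4).mean()])

def fmod_like_descriptor(channels, grid=(4, 4)):
    """Hypothetical FMoD-style descriptor: concatenate global and per-cell
    moments over all modality channels (e.g. luminance, colour components,
    optical-flow magnitude, stereoscopic disparity)."""
    feats = []
    for ch in channels:
        feats.append(channel_moments(ch))        # global scale
        h, w = ch.shape
        gh, gw = grid
        for i in range(gh):                      # local scale: grid cells
            for j in range(gw):
                cell = ch[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
                feats.append(channel_moments(cell))
    return np.concatenate(feats)
```

Because the result is a plain fixed-length vector, it can feed directly into any distance-based key-frame extraction scheme, such as intra-shot k-means clustering with cluster medoids taken as key-frames.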
Subsequently, disorienting editing effects induced by summarization are dealt with through manipulation of the video skim. As a final step, the skim is suitably post-processed in order to reduce stereoscopic video defects that may cause visual fatigue.
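To make the Column Subset Selection formulation of MSP concrete, the toy below shows a generic evolutionary approach to the problem on a single data matrix: individuals are candidate column (shot) subsets, and fitness is the Frobenius error of reconstructing the matrix from the selected columns. This is only a minimal single-matrix sketch under assumed details; the actual MSP couples visual and audio character-prominence data, which is omitted here.

```python
import numpy as np

def cssp_error(A, cols):
    """Frobenius error of projecting A onto the span of the selected columns."""
    C = A[:, cols]
    proj = C @ np.linalg.pinv(C) @ A
    return np.linalg.norm(A - proj)

def evolve_column_subset(A, k, pop_size=30, generations=60, seed=0):
    """Toy genetic algorithm for the Column Subset Selection Problem:
    individuals are k-column index sets, mutated by swapping one column."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: cssp_error(A, c))     # rank by fitness
        survivors = pop[:pop_size // 2]              # elitist selection
        children = []
        for parent in survivors:
            child = parent.copy()
            out = rng.integers(k)                    # mutate: swap one column
            candidates = np.setdiff1d(np.arange(n), child)
            child[out] = rng.choice(candidates)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: cssp_error(A, c))
```

In the summarization setting, retaining a column subset corresponds to retaining a shot subset whose size is governed by the user-provided shot retention parameter.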
