
Saliency-based Feature Selection Strategy in Stereoscopic Panoramic Video Generation

Citation Author(s):
Haoyu Wang, Daniel J. Sandin, Dan Schonfeld
Submitted by:
Haoyu Wang
Last updated:
19 April 2018 - 5:06pm
Document Type:
Poster
Document Year:
2018
Event:
Presenters:
Haoyu Wang
Paper Code:
IVMSP-P11.7
 

In this paper, we present a saliency-based feature selection and tracking strategy for a feature-based stereoscopic panoramic video generation system. Many existing stereoscopic video composition approaches aim to produce high-quality panoramas from multiple input cameras; however, most of them perform image alignment directly on the originally detected features without any refinement or optimization, and a standard global feature extraction threshold cannot guarantee correct stitching of all regions of human interest. Therefore, starting from the commonly detected feature set, we incorporate a saliency map into the distribution of control points to remove redundancy in texture-rich regions and to ensure an adequate number of selected features in visually sensitive regions. Furthermore, guided by the saliency change across the video sequence, a grid-based feature updating strategy is applied between consecutive frames instead of the standard global feature update. Experiments show that our method improves the stitching quality of visually important regions without impairing less salient regions in the generated stereoscopic panoramic video.
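To illustrate the idea of saliency-guided, grid-based feature selection, the following Python sketch allocates a per-cell feature budget in proportion to the cell's mean saliency. This is a minimal illustration under stated assumptions, not the paper's implementation: OpenCV ORB keypoints and the spectral-residual saliency model (from opencv-contrib) stand in for the system's detector and saliency map, and the function name select_features_by_saliency, the grid size, and the budget values are hypothetical.

# Minimal sketch: saliency-guided, grid-based keypoint selection.
# Assumptions: ORB keypoints and spectral-residual saliency replace the
# paper's detector and saliency map; grid size and budgets are illustrative.
import cv2
import numpy as np

def select_features_by_saliency(gray, keypoints, saliency, grid=(16, 16),
                                base_budget=4, max_budget=32):
    """Keep more keypoints per grid cell in salient regions, fewer elsewhere."""
    h, w = gray.shape[:2]
    ch, cw = h / grid[0], w / grid[1]
    cells = {}
    for kp in keypoints:
        x, y = kp.pt
        cells.setdefault((int(y // ch), int(x // cw)), []).append(kp)

    selected = []
    for (r, c), kps in cells.items():
        # Mean saliency of the cell scales its feature budget.
        y0, y1 = int(r * ch), int(min((r + 1) * ch, h))
        x0, x1 = int(c * cw), int(min((c + 1) * cw, w))
        s = float(saliency[y0:y1, x0:x1].mean())
        budget = int(base_budget + s * (max_budget - base_budget))
        # Within the budget, prefer the strongest responses, which removes
        # redundancy in texture-rich but low-saliency cells.
        kps.sort(key=lambda k: k.response, reverse=True)
        selected.extend(kps[:budget])
    return selected

if __name__ == "__main__":
    img = cv2.imread("frame.png")                      # hypothetical input frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create(nfeatures=5000).detect(gray, None)

    # Spectral-residual saliency (requires opencv-contrib); any saliency map
    # normalized to [0, 1] would fit the same interface.
    ok, sal = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(img)
    sal = cv2.normalize(sal.astype("float32"), None, 0.0, 1.0, cv2.NORM_MINMAX)

    kept = select_features_by_saliency(gray, keypoints, sal)
    print(f"kept {len(kept)} of {len(keypoints)} keypoints")

Tying the per-cell budget to mean saliency is one simple way to realize the trade-off described in the abstract: texture-rich but visually unimportant cells contribute only a few strong control points, while salient cells retain enough features to keep their stitching accurate.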

This publication is based on work supported in part by National Science Foundation award CNS-1456638 for the SENSEI project.
