SVMV: SPATIOTEMPORAL VARIANCE-SUPERVISED MOTION VOLUME FOR VIDEO FRAME INTERPOLATION

Citation Author(s):
Yao Luo, Jinshan Pan, Jinhui Tang
Submitted by:
Yao Luo
Last updated:
26 May 2023 - 7:31pm
Document Type:
Poster
Presenters:
Yao Luo
Paper Code:
6396

High-performance video frame interpolation is challenging for complex scenes with diverse motion and occlusion characteristics. Existing methods deploy off-the-shelf flow estimators to acquire initial characterizations that are then refined by multiple subsequent models, and therefore often require heavy network architectures that are impractical for resource-constrained systems. We investigate the unary potentials of these characterizations to improve efficiency. Specifically, we design a lightweight neural network that constructs motion volumes via ensembles of offset approximations, and we propose a spatiotemporal variance-aware loss to supervise the network's learning. For network compactness, our spatiotemporal variance-supervised motion volume (SVMV) uses shared spatiotemporal representations via correlations among the approximations; their diversity is exploited to better leverage the network's expressiveness through the spatiotemporal variances of motions and occlusions within the time interval to be interpolated. Experiments on publicly available datasets show that our method performs favorably against existing methods with a more compact network and less runtime.
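The abstract does not include implementation details. A minimal NumPy sketch of the general idea it describes — blending an ensemble of offset-warped candidates (a toy "motion volume") and weighting the reconstruction loss by the ensemble's spatial variance, so that pixels where the hypotheses disagree (e.g. occlusions) receive more supervision — might look like the following. All function names, the nearest-neighbour warp, and the specific variance weighting are illustrative assumptions, not the authors' code:

```python
import numpy as np

def warp(frame, offsets):
    """Backward-warp a frame (H, W) by per-pixel integer offsets (H, W, 2).
    Nearest-neighbour gather; a simplified stand-in for bilinear warping."""
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(ys + offsets[..., 0], 0, H - 1)
    sx = np.clip(xs + offsets[..., 1], 0, W - 1)
    return frame[sy, sx]

def motion_volume_blend(frame, offset_ensemble, weights):
    """Blend K offset-warped candidates into one prediction.
    offset_ensemble: list of K (H, W, 2) offset maps; weights: (K, H, W)."""
    candidates = np.stack([warp(frame, o) for o in offset_ensemble])  # (K, H, W)
    w = weights / weights.sum(axis=0, keepdims=True)                  # normalize per pixel
    return (w * candidates).sum(axis=0), candidates

def variance_aware_loss(pred, target, candidates, alpha=0.1):
    """L1 loss modulated by the per-pixel variance across the candidate
    ensemble: high-variance pixels (disagreeing hypotheses) weigh more."""
    var = candidates.var(axis=0)
    return np.mean((1.0 + alpha * var) * np.abs(pred - target))

# Toy usage: two offset hypotheses over a 4x4 frame.
frame = np.arange(16.0).reshape(4, 4)
offsets = [np.zeros((4, 4, 2), dtype=int), np.ones((4, 4, 2), dtype=int)]
weights = np.ones((2, 4, 4))
pred, cands = motion_volume_blend(frame, offsets, weights)
loss = variance_aware_loss(pred, frame, cands)
```

In a real network the offsets and blending weights would be predicted by the lightweight model and the warp would be differentiable (e.g. bilinear sampling); this sketch only shows how ensemble variance can serve as a spatial supervision weight.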
