ROBUST VISUAL TRACKING VIA DEEP DISCRIMINATIVE MODEL

Citation Author(s):
Heng Fan; Jinhai Xiang; Guoliang Li; Fuchuan Ni
Submitted by:
Jinhai Xiang
Last updated:
27 February 2017 - 10:18pm
Document Type:
Poster
Document Year:
2017
Presenters:
Jinhai Xiang
Paper Code:
ICASSP1701
In this paper, we exploit deep convolutional features for object appearance modeling and propose a simple yet effective deep discriminative model (DDM) for visual tracking. The proposed DDM takes deep features as input and outputs an object-background confidence map. Since both the spatial information from lower convolutional layers and the semantic information from higher layers benefit object tracking, we construct a separate deep discriminative model (DDM) for each layer and combine the per-layer confidence maps into a final object-background confidence map. To reduce the risk of model drift, we adopt a saliency method to generate object candidates. Tracking is then achieved by selecting the candidate with the largest confidence value. Experiments on a large-scale tracking benchmark demonstrate that the proposed method performs favorably against state-of-the-art trackers.
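The fusion-and-selection step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the paper does not specify how the per-layer maps are combined, so a weighted average is assumed, and candidates are taken as externally supplied `(x, y, w, h)` boxes (e.g. from a saliency detector) scored by mean confidence inside the box.

```python
import numpy as np

def combine_confidence_maps(maps, weights=None):
    """Fuse per-layer object-background confidence maps into one map.

    `maps` is a list of (H, W) arrays, one per convolutional layer.
    A uniform weighted average is assumed here for illustration.
    """
    stacked = np.stack(maps)                       # (L, H, W)
    if weights is None:
        weights = np.full(len(maps), 1.0 / len(maps))
    return np.tensordot(weights, stacked, axes=1)  # (H, W)

def select_candidate(conf_map, candidates):
    """Return the candidate box (x, y, w, h) whose region has the
    largest mean confidence in the fused map."""
    def score(box):
        x, y, w, h = box
        return conf_map[y:y + h, x:x + w].mean()
    return max(candidates, key=score)
```

For example, with two layers whose maps both peak over the same region, the fused map keeps that peak, and a candidate box covering it outscores one that mostly covers background.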
