
3D-HOG EMBEDDING FRAMEWORKS FOR SINGLE AND MULTI-VIEWPOINTS ACTION RECOGNITION BASED ON HUMAN SILHOUETTES

Citation Author(s): Federico Angelini, Zeyu Fu, Sergio A. Velastin, Jonathon A. Chambers, Syed Mohsen Naqvi
Submitted by: Federico Angelini
Last updated: 12 April 2018 - 4:29pm
Document Type: Poster
Document Year: 2018
Event:
Presenters: Federico Angelini
Paper Code: 2125
 

Given the high demand for automated human action recognition systems, great efforts have been undertaken in recent decades to advance the field. In this paper, we present frameworks for single- and multi-viewpoint action recognition based on Space-Time Volumes (STV) of human silhouettes and 3D-Histogram of Oriented Gradient (3D-HOG) embedding. We exploit computationally fast approaches: Principal Component Analysis (PCA) over the local feature spaces to compactly describe actions as combinations of local gestures, and L2-Regularized Logistic Regression (L2-RLR) to learn the action model from local features. Results on the Weizmann and i3DPost datasets confirm the efficacy of the proposed approaches compared with the baseline method and other works, in terms of accuracy and robustness to appearance changes.
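The sketch below illustrates the general shape of such a pipeline: a toy 3D-HOG descriptor computed over a binary silhouette Space-Time Volume, followed by PCA compression and an L2-regularized logistic regression classifier. It is a minimal illustration only; the descriptor, cell sizes, placeholder data, and use of scikit-learn are assumptions for exposition and not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): 3D-HOG over a silhouette STV,
# PCA for compact local-feature description, L2-regularized logistic regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def extract_3dhog(stv, cell=(8, 8, 4), bins=9):
    """Toy 3D-HOG: magnitude-weighted orientation histograms of spatio-temporal
    gradients, accumulated per cell and concatenated into one clip descriptor."""
    gx, gy, gt = np.gradient(stv.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gt**2)
    ang = np.arctan2(gy, gx) % np.pi          # spatial orientation only, for brevity
    H, W, T = stv.shape
    feats = []
    for y in range(0, H - cell[0] + 1, cell[0]):
        for x in range(0, W - cell[1] + 1, cell[1]):
            for t in range(0, T - cell[2] + 1, cell[2]):
                a = ang[y:y+cell[0], x:x+cell[1], t:t+cell[2]].ravel()
                m = mag[y:y+cell[0], x:x+cell[1], t:t+cell[2]].ravel()
                hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
                feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

# Placeholder silhouette volumes (H, W, T) and action labels; real inputs would
# come from background subtraction over Weizmann/i3DPost sequences.
stvs = [np.random.rand(64, 64, 16) > 0.5 for _ in range(20)]
y = np.random.randint(0, 4, size=20)
X = np.stack([extract_3dhog(v) for v in stvs])

model = make_pipeline(
    PCA(n_components=10),                                   # compact local-feature space
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000), # L2-RLR classifier
)
model.fit(X, y)
print(model.predict(X[:3]))
```

In practice the PCA dimensionality, cell grid, and regularization strength C would be tuned per dataset; the sketch only shows how the three stages compose.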
