View-Invariant Action Recognition From RGB Data via 3D Pose Estimation

Citation Author(s):
Enjie Ghorbel, Konstantinos Papadopoulos, Girum G. Demisse, Djamila Aouada, Björn Ottersten
Submitted by:
Renato Baptista
Last updated:
8 May 2019 - 7:19am
Presenters Name:
Renato Baptista



In this paper, we propose a novel view-invariant action recognition method using a single monocular RGB camera. View-invariance remains a very challenging topic in 2D action recognition due to the lack of 3D information in RGB images. Most successful approaches make use of the concept of knowledge transfer by projecting 3D synthetic data to multiple viewpoints. Instead of relying on knowledge transfer, we propose to augment the RGB data with a third dimension by estimating 3D skeletons from 2D images using a CNN-based pose estimator. To ensure view-invariance, a pre-processing alignment step is applied, followed by data expansion as a denoising strategy. Finally, a Long Short-Term Memory (LSTM) architecture is used to model the temporal dependencies between skeletons. The proposed network is trained to recognize actions directly from the aligned 3D skeletons. Experiments on the challenging Northwestern-UCLA dataset show the superiority of our approach over state-of-the-art methods.
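The alignment step described above can be illustrated with a minimal sketch: each estimated 3D skeleton is translated so its root joint sits at the origin and rotated about the vertical axis so the hip line points in a canonical direction. The joint indices and the choice of rotation axis here are hypothetical, for illustration only; the actual pre-processing in the paper depends on the skeleton layout produced by the pose estimator.

```python
import numpy as np

def align_skeleton(joints, root=0, left_hip=1, right_hip=2):
    """Illustrative view-invariant alignment of a (J, 3) skeleton.

    Translates the skeleton so the root joint is at the origin, then
    rotates about the y (vertical) axis so the left-to-right hip vector
    lies along the positive x axis. Joint indices are assumptions.
    """
    centered = joints - joints[root]              # remove global translation
    v = centered[right_hip] - centered[left_hip]  # hip line in the x-z plane
    theta = np.arctan2(v[2], v[0])                # yaw angle of the hip line
    c, s = np.cos(theta), np.sin(theta)
    # rotation about y that maps the hip line onto the positive x axis
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return centered @ R.T
```

After this normalization, skeletons of the same action captured from different viewpoints become directly comparable, which is what allows a single LSTM to be trained across views.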
