
Dynamic Probabilistic Linear Discriminant Analysis for Face Recognition in Videos

Citation Author(s):
Alessandro Fabris, Mihalis Nicolau, Irene Kotsia, Stefanos Zafeiriou
Submitted by:
Alessandro Fabris
Last updated:
5 March 2017 - 4:58am
Document Type:
Poster
Document Year:
2017
Event:
Presenters:
Alessandro Fabris
Paper Code:
MLSP-P9.3
 

Component Analysis (CA) for computer vision and machine learning comprises a set of statistical techniques that decompose visual data into latent components relevant to the task at hand, such as alignment, clustering, segmentation, and classification. Over the past few years we have witnessed an explosion of research in component analysis, introducing both novel deterministic and probabilistic models (e.g., Probabilistic Principal Component Analysis (PPCA), Probabilistic Linear Discriminant Analysis (PLDA), and Probabilistic Canonical Correlation Analysis (PCCA)). A popular generative probabilistic component analysis method that incorporates knowledge of class labels is PLDA. PLDA introduces two latent spaces, one that is class-specific and one that is sample-specific. However, PLDA is a static model: it does not capture any feature-level temporal dependencies that may arise in the data at hand. As has been repeatedly shown in the literature, modeling temporal dynamics is crucial when analysing temporal data (e.g., faces observed over time). In this paper, we propose the first, to the best of our knowledge, PLDA that models and captures dynamic information, the so-called Dynamic PLDA (DPLDA). DPLDA is a generative model suited to video classification, jointly modeling the label information (e.g., face identity, which is consistent across videos of the same person) and the dynamic variations of each individual video. We show how to learn the model's parameters efficiently and effectively and how to apply the model to classifying videos. Finally, we demonstrate the efficacy of the proposed model in applications such as face and facial expression recognition.
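To make the two latent spaces mentioned above concrete, the following is a minimal sketch of the standard (static) PLDA generative model in Python/NumPy: each observation of class i is generated as x_ij = mu + F h_i + G w_ij + eps_ij, where h_i is shared by all samples of the class and w_ij is sample-specific. All dimensions, variable names, and parameter values here are illustrative assumptions, and the sketch does not include the temporal dynamics that DPLDA adds on top of this model.

    import numpy as np

    rng = np.random.default_rng(0)

    D = 64        # observed feature dimension (assumed)
    d_id = 8      # class-specific (identity) subspace dimension (assumed)
    d_w = 8       # sample-specific subspace dimension (assumed)
    sigma = 0.1   # isotropic noise standard deviation (assumed)

    mu = rng.normal(size=D)          # global mean
    F = rng.normal(size=(D, d_id))   # class-specific (identity) loading matrix
    G = rng.normal(size=(D, d_w))    # sample-specific loading matrix

    def sample_class(n_samples):
        """Draw n_samples observations sharing one identity latent h."""
        h = rng.normal(size=d_id)                        # shared across the class
        W = rng.normal(size=(n_samples, d_w))            # one w per sample
        noise = sigma * rng.normal(size=(n_samples, D))
        return mu + F @ h + W @ G.T + noise              # x_ij = mu + F h_i + G w_ij + eps_ij

    # e.g., ten frames depicting the same face identity
    X = sample_class(10)
    print(X.shape)  # (10, 64)

In this static formulation the frames of a video are exchangeable; the DPLDA proposed in the paper instead ties the per-sample latent variables of a video together over time.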
