
Canonical correlation analysis (CCA) describes the relationship between two sets of variables by finding linear combinations of the variables with maximal correlation. Recently, under the assumption that the leading canonical correlation directions are sparse, various procedures have been proposed for high-dimensional applications to improve the interpretability of CCA. However, all of these procedures share the drawback of not preserving sparsity among the retained leading canonical directions. To address this issue, a new sparse CCA method is proposed in this paper.
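For background, the sketch below illustrates classical (non-sparse) CCA on synthetic data using scikit-learn; the sparse procedure proposed in the paper, and its sparsity-preserving property, are not reproduced here, and the data dimensions are arbitrary placeholders.

```python
# Minimal sketch of classical (non-sparse) CCA for background only.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=(n, 2))                           # shared latent signal
X = latent @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(n, 10))
Y = latent @ rng.normal(size=(2, 8)) + 0.5 * rng.normal(size=(n, 8))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit(X, Y).transform(X, Y)                   # canonical variates

# Correlation of each pair of canonical variates; CCA chooses the linear
# combinations of the X and Y columns that make these correlations maximal.
for k in range(2):
    r = np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.3f}")
```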


The ability of deep neural networks to extract complex statistics and learn high-level features from vast datasets is proven. Yet current deep learning approaches suffer from poor sample efficiency, in stark contrast to human perception. Few-shot learning algorithms such as matching networks or Model-Agnostic Meta-Learning (MAML) mitigate this problem, enabling fast learning from few examples. In this paper, we extend the MAML algorithm to point cloud data using a PointNet architecture.
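The sketch below shows a first-order variant of the MAML inner/outer loop applied to point-cloud batches. The tiny shared per-point MLP with max-pooling is only a stand-in for a PointNet backbone, and the task sampling, hyperparameters, and layer sizes are assumptions for illustration, not the paper's setup.

```python
# First-order MAML sketch on toy point-cloud classification tasks.
import torch
import torch.nn.functional as F

def init_params(in_dim=3, hidden=64, n_classes=5):
    """Shared per-point MLP + max-pooling + linear classifier (PointNet-like stub)."""
    p = lambda *shape: (0.05 * torch.randn(*shape)).requires_grad_()
    return [p(in_dim, hidden), p(hidden), p(hidden, n_classes), p(n_classes)]

def forward(points, params):
    # points: (batch, n_points, 3) raw point-cloud coordinates.
    w1, b1, w2, b2 = params
    h = F.relu(points @ w1 + b1)                           # per-point features
    g = h.max(dim=1).values                                # symmetric max-pooling
    return g @ w2 + b2                                     # class logits

def maml_step(params, tasks, inner_lr=0.01, outer_lr=0.001, inner_steps=1):
    """One first-order meta-update over a batch of (support, query) tasks."""
    meta_grads = [torch.zeros_like(p) for p in params]
    for (xs, ys), (xq, yq) in tasks:
        fast = list(params)
        for _ in range(inner_steps):                       # inner-loop adaptation
            loss = F.cross_entropy(forward(xs, fast), ys)
            grads = torch.autograd.grad(loss, fast)
            fast = [w - inner_lr * g for w, g in zip(fast, grads)]
        q_loss = F.cross_entropy(forward(xq, fast), yq)    # evaluate adapted weights
        for mg, g in zip(meta_grads, torch.autograd.grad(q_loss, params)):
            mg += g
    with torch.no_grad():                                  # outer-loop update
        for w, mg in zip(params, meta_grads):
            w -= outer_lr * mg / len(tasks)
    return params

# Toy usage: 5-way / 1-shot tasks built from random "point clouds".
def random_task(n_way=5, k_shot=1, k_query=5, n_pts=128):
    support = (torch.randn(n_way * k_shot, n_pts, 3),
               torch.arange(n_way).repeat_interleave(k_shot))
    query = (torch.randn(n_way * k_query, n_pts, 3),
             torch.arange(n_way).repeat_interleave(k_query))
    return support, query

params = init_params()
params = maml_step(params, [random_task() for _ in range(4)])
```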


Motions of facial components convey significant information about facial expressions. Although remarkable advances have been made, the dynamics of facial topology have not been fully exploited. In this paper, a novel facial expression recognition (FER) algorithm called Spatial Temporal Semantic Graph Network (STSGN) is proposed to automatically learn spatial and temporal patterns through end-to-end feature learning from the facial topology structure.
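As a rough illustration of learning from facial topology, the sketch below implements a generic spatial-temporal graph convolution over landmark sequences. The adjacency, layer configuration, and semantic components of STSGN itself are not reproduced; everything here is an assumed placeholder for the general idea.

```python
# Generic spatial-temporal graph convolution over facial-landmark sequences.
import torch
import torch.nn as nn

class STGraphConv(nn.Module):
    """Spatial graph convolution over landmarks followed by a temporal convolution."""
    def __init__(self, in_ch, out_ch, adj, t_kernel=3):
        super().__init__()
        # Symmetrically normalised adjacency with self-loops (fixed, not learned here).
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("A", d[:, None] * a * d[None, :])
        self.spatial = nn.Linear(in_ch, out_ch)            # per-landmark feature map
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                                  padding=(t_kernel // 2, 0))

    def forward(self, x):
        # x: (batch, T, V, C) features for V facial landmarks over T frames.
        h = torch.einsum("uv,btvc->btuc", self.A, x)       # aggregate graph neighbours
        h = torch.relu(self.spatial(h))                    # (batch, T, V, out_ch)
        h = self.temporal(h.permute(0, 3, 1, 2))           # convolve along the time axis
        return torch.relu(h).permute(0, 2, 3, 1)           # back to (batch, T, V, out_ch)

# Toy usage: 16 frames of 68 2-D facial landmarks with a random placeholder topology.
adj = (torch.rand(68, 68) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
layer = STGraphConv(in_ch=2, out_ch=32, adj=adj)
features = layer(torch.randn(4, 16, 68, 2))                # (4, 16, 68, 32)
```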


The task of head pose estimation from a single depth image is challenging due to large pose variations, occlusions and an inhomogeneous facial feature space. To solve this problem, we propose a Deep Regression Forest with Soft-Attention (SA-DRF) in a multi-task learning setup. It can be integrated with a general feature learning network and trained jointly in an end-to-end manner. The soft-attention module learns soft masks from the general features and feeds the forest task-specific features for regressing head poses.
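The sketch below illustrates only the soft-masking idea: an attention mask learned from shared backbone features gates them into task-specific features for a pose regressor. A plain MLP stands in for the deep regression forest, and the backbone features are random placeholders rather than the paper's feature learning network.

```python
# Soft-attention gating of shared features for head-pose regression (sketch).
import torch
import torch.nn as nn

class SoftAttentionHead(nn.Module):
    def __init__(self, feat_dim=256, out_dim=3):           # e.g. yaw, pitch, roll
        super().__init__()
        # Soft mask in [0, 1] learned from the general backbone features.
        self.mask = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        # Plain MLP regressor standing in for the deep regression forest.
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, general_feats):
        task_feats = self.mask(general_feats) * general_feats   # soft masking
        return self.regressor(task_feats)

# Toy usage with random "backbone" features from a depth-image encoder.
head = SoftAttentionHead()
angles = head(torch.randn(8, 256))                          # (8, 3) pose estimates
```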
