
Multimodal signal processing

VISUAL AND TEXTUAL SENTIMENT ANALYSIS USING DEEP FUSION CONVOLUTIONAL NEURAL NETWORKS


Sentiment analysis is attracting increasing attention and has become a very active research topic due to its potential applications in personalized recommendation, opinion mining, etc. Most existing methods are based on either textual or visual data alone and cannot achieve satisfactory results, as it is very hard to extract sufficient information from a single modality.
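
As a rough illustration of feature-level fusion (not the authors' deep fusion architecture), the following PyTorch sketch combines pooled text embeddings with pre-extracted CNN image features before classification; all layer names and sizes are assumptions:

```python
# Minimal sketch of visual-textual feature fusion for sentiment classification.
# Sizes and structure are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class FusionSentimentNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, img_feat_dim=2048,
                 fused_dim=256, num_classes=2):
        super().__init__()
        # Textual branch: embed tokens, average-pool over the sequence, project.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_fc = nn.Linear(embed_dim, fused_dim)
        # Visual branch: assumes pre-extracted CNN features (e.g. pooled activations).
        self.img_fc = nn.Linear(img_feat_dim, fused_dim)
        # Fusion head: concatenate both modalities, then classify.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * fused_dim, num_classes),
        )

    def forward(self, tokens, img_feats):
        text = self.text_fc(self.embed(tokens).mean(dim=1))  # (B, fused_dim)
        img = self.img_fc(img_feats)                         # (B, fused_dim)
        return self.classifier(torch.cat([text, img], dim=1))

# Usage: logits = FusionSentimentNet()(token_ids, cnn_features)
```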

Paper Details

Authors:
Xingyue Chen, Yunhong Wang, Qingjie Liu
Submitted On:
14 September 2017 - 4:15am

Document Files

Chen_ICIP17_DeepFusion_Slides.pdf



[1] Xingyue Chen, Yunhong Wang, Qingjie Liu, "VISUAL AND TEXTUAL SENTIMENT ANALYSIS USING DEEP FUSION CONVOLUTIONAL NEURAL NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2000. Accessed: Oct. 22, 2017.

CHAM: ACTION RECOGNITION USING CONVOLUTIONAL HIERARCHICAL ATTENTION MODEL


(1) We improved the soft attention model by introducing convolutional operations inside the LSTM cell and the attention map generation process to capture the spatial layout (see the sketch below).
(2) We built a hierarchical two-layer LSTM model for action recognition.
(3) We tested our model on three widely used datasets, the UCF Sports dataset, the Olympic dataset, and the HMDB51 dataset, with improved results over previously published work.
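
As a hedged sketch of contribution (1), the following PyTorch cell implements the standard ConvLSTM formulation, in which all LSTM gates are computed with convolutions so that the hidden state retains a spatial layout; it is not the exact CHAM cell:

```python
# Minimal ConvLSTM cell (standard formulation): gates are convolutions over
# the input and hidden state, preserving the spatial layout of activations.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state  # hidden and cell states, each (B, hid_ch, H, W)
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g      # convolutional cell update
        h = o * c.tanh()       # spatially structured hidden state
        return h, c
```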


Paper Details

Authors:
Shiyang Yan, Jeremy S. Smith, Wenjin Lu, Bailing Zhang
Submitted On:
28 August 2017 - 11:17pm

Document Files

poster.pdf



[1] Shiyang Yan, Jeremy S. Smith, Wenjin Lu, Bailing Zhang, "CHAM: ACTION RECOGNITION USING CONVOLUTIONAL HIERARCHICAL ATTENTION MODEL", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1814. Accessed: Oct. 22, 2017.

EMOTION RECOGNITION THROUGH INTEGRATING EEG AND PERIPHERAL SIGNALS


The inherent dependencies among multiple physiological signals are crucial for multimodal emotion recognition, but have not been thoroughly exploited yet. This paper proposes to use a restricted Boltzmann machine (RBM) to model such dependencies. Specifically, the visible nodes of the RBM represent EEG and peripheral physiological signals, and thus the connections between visible and hidden nodes capture the intrinsic relations among the multiple physiological signals. The RBM also generates a new representation from the multiple physiological signals.
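
As a minimal illustration of this idea, the sketch below trains a Bernoulli RBM on concatenated (synthetic) EEG and peripheral features and reads off the hidden activations as a joint representation; the data, sizes, and hyperparameters are assumptions, not the authors' setup:

```python
# Bernoulli RBM over concatenated physiological features; the hidden layer
# serves as a joint representation. Synthetic data, illustrative parameters.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
eeg = rng.random((200, 32))          # hypothetical EEG features in [0, 1]
peripheral = rng.random((200, 8))    # hypothetical peripheral features in [0, 1]
visible = np.hstack([eeg, peripheral])

rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(visible)

# New joint representation capturing cross-signal dependencies:
hidden = rbm.transform(visible)      # (200, 64) hidden-unit probabilities
```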

Paper Details

Authors:
Yangyang Shu and Shangfei Wang
Submitted On:
13 March 2017 - 9:31pm

Document Files

ICASSP_YANGYANG.pdf



[1] Yangyang Shu and Shangfei Wang, "EMOTION RECOGNITION THROUGH INTEGRATING EEG AND PERIPHERAL SIGNALS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1762. Accessed: Oct. 22, 2017.

Personalized video emotion tagging through a topic model


The inherent dependencies among video content, personal characteristics, and perceptual emotion are crucial for personalized video emotion tagging, but have not been thoroughly exploited. To address this, we propose a novel topic model to capture such inherent dependencies. We assume that there are several potential human factors, or “topics,” that affect the personal characteristics and the personalized emotion responses to videos.
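
A toy generative sketch of the assumed structure follows: each user draws a latent topic (human factor) that influences the personalized emotion response to a video. This is purely illustrative and does not reproduce the paper's topic model:

```python
# Toy generative process: latent "topic" (human factor) -> emotion response.
# Hypothetical parameters only; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
K, E = 3, 4                          # latent topics, emotion categories
topic_prior = np.full(K, 1.0 / K)
# Per-topic emotion distributions (hypothetical parameters):
emotion_given_topic = rng.dirichlet(np.ones(E), size=K)

def sample_user_emotion():
    z = rng.choice(K, p=topic_prior)                    # latent human factor
    emotion = rng.choice(E, p=emotion_given_topic[z])   # personalized response
    return z, emotion

print(sample_user_emotion())
```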

Paper Details

Authors:
Shan Wu, Shangfei Wang, Zhen Gao
Submitted On:
13 March 2017 - 9:22pm

Document Files

ICASSP_SHAN.pdf



[1] Shan Wu, Shangfei Wang, Zhen Gao, "Personalized video emotion tagging through a topic model", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1761. Accessed: Oct. 22, 2017.

Use of Affect Based Interaction Classification for Continuous Emotion Tracking


Natural and affective handshakes of two participants define the course of a dyadic interaction, and the affective states of the participants are expected to be correlated with its nature. In this paper, we extract two classes of dyadic interaction based on temporal clustering of affective states. We use k-means temporal clustering to define the interaction classes, and a support vector machine based classifier to estimate the interaction class type from multimodal (speech and motion) features.
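
A minimal sketch of the described two-step pipeline, using scikit-learn on synthetic data (the paper's temporal clustering and feature extraction are more elaborate):

```python
# Step 1: cluster affective states into two interaction classes.
# Step 2: train an SVM to predict the class from multimodal features.
# Synthetic data; dimensions are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
affect = rng.random((500, 2))        # e.g. activation/valence per time window
features = rng.random((500, 40))     # speech + motion features per window

# Define two interaction classes by clustering affective states.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(affect)

# Estimate the interaction class from multimodal features.
clf = SVC(kernel="rbf").fit(features, labels)
predicted_class = clf.predict(features[:5])
```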

Paper Details

Authors:
Hossein Khaki, Engin Erzin
Submitted On:
3 March 2017 - 8:28am

Document Files

khaki-erzin-icassp17.pdf



[1] Hossein Khaki, Engin Erzin, "Use of Affect Based Interaction Classification for Continuous Emotion Tracking", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1608. Accessed: Oct. 22, 2017.

A First Attempt at Polyphonic Sound Event Detection Using Connectionist Temporal Classification


Sound event detection is the task of detecting the type, starting time, and ending time of sound events in audio streams. Recently, recurrent neural networks (RNNs) have become the mainstream solution for sound event detection. Because RNNs make a prediction at every frame, it is necessary to provide exact starting and ending times of the sound events in the training data, making data annotation an extremely time-consuming process.
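
The CTC criterion named in the title addresses exactly this annotation burden: it trains frame-level predictions from an unsegmented label sequence, with no per-frame start/end times. A minimal PyTorch sketch, with illustrative shapes:

```python
# CTC trains a frame-level model from ordered event labels without timestamps.
# Shapes and sizes are illustrative.
import torch
import torch.nn as nn

T, B, C = 100, 1, 5                  # frames, batch, classes (incl. blank=0)
logits = torch.randn(T, B, C, requires_grad=True)
log_probs = logits.log_softmax(dim=2)
targets = torch.tensor([[1, 3, 2]])  # ordered event labels, no start/end times
input_lengths = torch.tensor([T])
target_lengths = torch.tensor([3])

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                      # gradients flow to the frame predictions
```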

Paper Details

Authors:
Florian Metze
Submitted On:
27 February 2017 - 5:12pm

Document Files

2017.03 Poster for ICASSP.pdf



[1] Florian Metze, "A First Attempt at Polyphonic Sound Event Detection Using Connectionist Temporal Classification", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1451. Accessed: Oct. 22, 2017.

ROBUST ONLINE MULTI-OBJECT TRACKING BASED ON KCF TRACKERS AND REASSIGNMENT


Online multi-object tracking-by-detection faces major challenges caused by frequent occlusions, false alarms, missed detections, and other factors. In this paper, we propose an improved fast online multi-object tracking method that synthetically takes into account the results of multiple single-object trackers and detections. To address the fixed-scale limitation of the conventional kernelized correlation filter used in our single-object trackers, trackers are associated with detections based on position and size, and an adaptive mechanism for the trackers is then established.
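
As a hedged sketch of associating trackers with detections by position and size, the following uses a simple distance-plus-scale cost and the Hungarian algorithm; the paper's actual cost function and adaptive mechanism are not reproduced:

```python
# Tracker-detection association: cost = center distance + size difference,
# solved as an optimal assignment. Weights are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(trackers, detections, w_pos=1.0, w_size=0.5):
    """trackers, detections: arrays of [cx, cy, w, h] boxes."""
    pos_cost = np.linalg.norm(
        trackers[:, None, :2] - detections[None, :, :2], axis=2)
    size_cost = np.abs(
        trackers[:, None, 2:] - detections[None, :, 2:]).sum(axis=2)
    cost = w_pos * pos_cost + w_size * size_cost
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

trk = np.array([[10., 10., 5., 8.], [40., 40., 6., 6.]])
det = np.array([[41., 39., 6., 7.], [11., 12., 5., 8.]])
print(associate(trk, det))  # [(0, 1), (1, 0)]
```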

Paper Details

Authors:
Huiling Wu, Weihai Li
Submitted On:
6 December 2016 - 9:18pm

Document Files

ROBUST ONLINE MULTI-OBJECT TRACKING BASED ON KCF TRACKERS AND REASSIGNMENT



[1] Huiling Wu, Weihai Li, "ROBUST ONLINE MULTI-OBJECT TRACKING BASED ON KCF TRACKERS AND REASSIGNMENT", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1385. Accessed: Oct. 22, 2017.

TRACKING HIERARCHICAL STRUCTURE OF WEB VIDEO GROUPS BASED ON SALIENT KEYWORD MATCHING INCLUDING SEMANTIC BROADNESS ESTIMATION


This paper presents a novel method to track the hierarchical structure of Web video groups on the basis of salient keyword matching, including semantic broadness estimation. To the best of our knowledge, this is the first work to perform extraction and tracking of the hierarchical structure simultaneously. Specifically, the proposed method first extracts the hierarchical structure of Web video groups and their salient keywords on the basis of an improved scheme of our previously reported method.
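
As a toy stand-in for salient keyword matching (the paper's matching criterion and semantic broadness estimation are more involved), groups at consecutive time steps can be linked by keyword-set similarity, e.g. Jaccard:

```python
# Link video groups across time steps by salient-keyword overlap.
# Jaccard similarity is a swapped-in, illustrative criterion.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

groups_t1 = {"g1": {"soccer", "goal", "match"}, "g2": {"piano", "concert"}}
groups_t2 = {"h1": {"piano", "recital", "concert"}, "h2": {"soccer", "league"}}

# Track each old group to its best-matching new group.
for gid, kws in groups_t1.items():
    best = max(groups_t2, key=lambda h: jaccard(kws, groups_t2[h]))
    print(gid, "->", best, round(jaccard(kws, groups_t2[best]), 2))
```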

Paper Details

Authors:
Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
Submitted On:
6 December 2016 - 6:46pm

Document Files

harakawa_globalsip2016.pdf



[1] Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama, "TRACKING HIERARCHICAL STRUCTURE OF WEB VIDEO GROUPS BASED ON SALIENT KEYWORD MATCHING INCLUDING SEMANTIC BROADNESS ESTIMATION", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1375. Accessed: Oct. 22, 2017.

Discriminant Correlation Analysis for Feature Level Fusion with Application to Multimodal Biometrics



In this paper, we present Discriminant Correlation Analysis (DCA), a feature level fusion technique that incorporates the class associations in correlation analysis of the feature sets. DCA performs an effective feature fusion by maximizing the pair-wise correlations across the two feature sets, and at the same time, eliminating the between-class correlations and restricting the correlations to be within classes.
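
A simplified stand-in for this kind of discriminant, feature-level fusion: project each modality with a class-discriminant transform (plain LDA here) and fuse by concatenation. Unlike DCA, this sketch does not maximize between-set correlations; the data and sizes are synthetic assumptions:

```python
# Feature-level fusion via per-modality discriminant projections, then
# concatenation. A simplified stand-in, not the DCA algorithm itself.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)            # 3 classes
face = rng.random((300, 50)) + y[:, None]   # hypothetical modality 1 features
iris = rng.random((300, 30)) + y[:, None]   # hypothetical modality 2 features

lda1 = LinearDiscriminantAnalysis(n_components=2).fit(face, y)
lda2 = LinearDiscriminantAnalysis(n_components=2).fit(iris, y)

# Fused feature vector for a subsequent classifier:
fused = np.hstack([lda1.transform(face), lda2.transform(iris)])  # (300, 4)
```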

Paper Details

Authors:
Mohammad Haghighat, Mohamed Abdel-Mottaleb, Wadee Alhalabi
Submitted On:
16 July 2016 - 11:13pm

Document Files

DCA_ICASSP16_Poster.pdf



[1] Mohammad Haghighat, Mohamed Abdel-Mottaleb, Wadee Alhalabi, "Discriminant Correlation Analysis for Feature Level Fusion with Application to Multimodal Biometrics", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/828. Accessed: Oct. 22, 2017.

Siamese Neural Network based Gait Recognition for Human Identification


[Figure] The framework of the proposed Siamese neural network based gait recognition for human identification.

Owing to its remarkable characteristics of remote acquisition, robustness, and security, gait recognition has gained significant attention in the biometrics-based human identification task. However, existing methods mainly employ handcrafted gait features, which cannot handle well the indistinctive inter-class differences and large intra-class variations of human gait in real-world situations. In this paper, we have developed a Siamese neural network based gait recognition framework to automatically extract robust and discriminative gait features for human identification.
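
A minimal Siamese sketch with a contrastive loss, assuming generic gait feature vectors; the architecture and sizes are illustrative, not the paper's network:

```python
# Shared embedding network applied to two gait inputs; a contrastive loss
# pulls same-identity pairs together and pushes different-identity pairs
# apart. Sizes and layers are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitEmbed(nn.Module):
    def __init__(self, in_dim=128, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

def contrastive_loss(e1, e2, same, margin=1.0):
    d = F.pairwise_distance(e1, e2)
    return (same * d.pow(2) +
            (1 - same) * F.relu(margin - d).pow(2)).mean()

net = GaitEmbed()
x1, x2 = torch.randn(8, 128), torch.randn(8, 128)   # gait feature pairs
same = torch.randint(0, 2, (8,)).float()            # 1 = same identity
loss = contrastive_loss(net(x1), net(x2), same)
loss.backward()
```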

Paper Details

Authors:
Cheng Zhang, Wu Liu, Huadong Ma, Huiyuan Fu
Submitted On:
19 March 2016 - 8:39am

Document Files

Poster For ICASSP 2016 - Siamese Neural Network based Gait Recognition for Human Identification.pdf



[1] Cheng Zhang, Wu Liu, Huadong Ma, Huiyuan Fu, "Siamese Neural Network based Gait Recognition for Human Identification", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/806. Accessed: Oct. 22, 2017.
