
Multimodal signal processing

Disparity Map Estimation from Cross-modal Stereo


The mono-modal stereo matching problem has been studied for decades. The introduction of cross-modal stereo systems in industrial scenes has increased interest in cross-modal stereo matching. Existing algorithms mostly assume a mono-modal setting, so they do not translate well to the cross-modal setting. Recent work on cross-modal stereo considers only small local matching and focuses mainly on joint enhancement. We therefore propose a guided filter-based stereo matching algorithm that integrates the guided filter equation into a basic cost function for cost volume generation.
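As a rough illustration of cost-volume filtering with a guided filter, here is a minimal sketch: it builds a per-disparity absolute-difference cost volume and smooths each slice with the classic guided filter of He et al., guided by the left image. The AD cost, window radius, and regularization eps are illustrative assumptions; the paper's formulation embeds the guided filter equation in the cost function itself rather than applying it as a separate filtering pass.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter (He et al.): smooth `src` while following
    the edges of `guide`. Both are float grayscale images in [0, 1]."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)  # eps controls edge sensitivity
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def disparity_map(left, right, max_disp=48):
    """Absolute-difference cost per disparity, smoothed per slice by the
    guided filter, then winner-takes-all over the cost volume."""
    H, W = left.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :W - d])
        cost[d, :, d:] = guided_filter(left[:, d:], diff)
    return np.argmin(cost, axis=0)
```

For a cross-modal pair, the raw AD cost would need to be replaced by a modality-robust matching cost, which is the part the paper's guided-filter formulation addresses.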

Paper Details

Authors:
Thapanapong Rukkanchanunt, Takashi Shibata, Masayuki Tanaka, Masatoshi Okutomi
Submitted On:
28 November 2018 - 12:15am

Document Files

presentation.pdf


Thapanapong Rukkanchanunt, Takashi Shibata, Masayuki Tanaka, and Masatoshi Okutomi, "Disparity Map Estimation from Cross-modal Stereo," IEEE SigPort, 2018. Available: http://sigport.org/3819

CNN-BASED ACTION RECOGNITION USING ADAPTIVE MULTISCALE DEPTH MOTION MAPS AND STABLE JOINT DISTANCE MAPS


Human action recognition has a wide range of applications, including biometrics and surveillance. Existing methods mostly focus on a single modality, which is insufficient to characterize the variations among different motions. To address this problem, we present a CNN-based human action recognition framework that fuses the depth and skeleton modalities. The proposed Adaptive Multiscale Depth Motion Maps (AM-DMMs) are computed from depth maps to capture shape and motion cues. Moreover, adaptive temporal windows make the AM-DMMs robust to variations in motion speed.
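To make the depth-map side concrete, below is a minimal sketch of the basic depth motion map computation that AM-DMMs build on: accumulating absolute differences between consecutive depth frames. The paper's adaptive temporal windows and multiple scales are not reproduced here; the fixed `window` parameter is an illustrative stand-in.

```python
import numpy as np

def depth_motion_map(depth_frames, window=None):
    """Basic depth motion map: accumulate absolute differences between
    consecutive depth frames over a temporal window. The paper's AM-DMMs
    additionally adapt the window to motion speed and use several scales;
    this fixed-window version only illustrates the core accumulation."""
    frames = np.asarray(depth_frames, dtype=np.float32)
    if window is not None:
        frames = frames[:window]
    diffs = np.abs(np.diff(frames, axis=0))  # |D_{t+1} - D_t| per pixel
    dmm = diffs.sum(axis=0)
    # Normalize to [0, 255] so the map can be fed to an image CNN.
    dmm -= dmm.min()
    if dmm.max() > 0:
        dmm *= 255.0 / dmm.max()
    return dmm.astype(np.uint8)
```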

Paper Details

Authors:
Junyou He, Hailun Xia, Chunyan Feng, Yunfei Chu
Submitted On:
20 November 2018 - 5:44am

Document Files

CNN-BASED ACTION RECOGNITION USING ADAPTIVE MULTISCALE DEPTH MOTION MAPS AND STABLE JOINT DISTANCE MAPS


Junyou He, Hailun Xia, Chunyan Feng, and Yunfei Chu, "CNN-BASED ACTION RECOGNITION USING ADAPTIVE MULTISCALE DEPTH MOTION MAPS AND STABLE JOINT DISTANCE MAPS," IEEE SigPort, 2018. Available: http://sigport.org/3692

Can DNNs Learn to Lipread Full Sentences?


Finding visual features and suitable models for lipreading tasks that are more complex than a well-constrained vocabulary has proven challenging. This paper explores state-of-the-art Deep Neural Network architectures for lipreading based on a Sequence-to-Sequence Recurrent Neural Network. We report results for both hand-crafted and 2D/3D Convolutional Neural Network visual front-ends, online monotonic attention, and a joint Connectionist Temporal Classification-Sequence-to-Sequence loss.
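The joint loss named above is usually realized as a weighted sum of a CTC term over the encoder outputs and the attention decoder's per-token cross-entropy. A minimal PyTorch sketch follows, assuming padded integer targets; the weight `lam` and `pad_id` are assumed hyper-parameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def joint_ctc_attention_loss(enc_logits, dec_logits, targets,
                             input_lengths, target_lengths,
                             lam=0.5, pad_id=0):
    """Hybrid CTC/attention objective: lam * CTC + (1 - lam) * CE.
    `targets` is a padded (N, U) batch of token ids; `enc_logits` is
    (N, T, C) over encoder frames, `dec_logits` is (N, U, C)."""
    # CTC expects (T, N, C) log-probabilities over the encoder frames;
    # target_lengths tell it to ignore the padding in each target row.
    log_probs = F.log_softmax(enc_logits, dim=-1).transpose(0, 1)
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                     blank=pad_id, zero_infinity=True)
    # Attention decoder: per-step cross-entropy, ignoring padding.
    ce = F.cross_entropy(dec_logits.reshape(-1, dec_logits.size(-1)),
                         targets.reshape(-1), ignore_index=pad_id)
    return lam * ctc + (1.0 - lam) * ce
```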

Paper Details

Authors:
George Sterpu, Christian Saam, Naomi Harte
Submitted On:
8 October 2018 - 1:50am

Document Files

slides.pdf


George Sterpu, Christian Saam, and Naomi Harte, "Can DNNs Learn to Lipread Full Sentences?," IEEE SigPort, 2018. Available: http://sigport.org/3608

ICIP poster presentation Paper TP.P5.2 (1865): 'SHOT SCALE ANALYSIS IN MOVIES BY CONVOLUTIONAL NEURAL NETWORKS'


The apparent distance of the camera from the subject of a filmed scene, namely the shot scale, is one of the prominent formal features of any filmic product, endowed with both stylistic and narrative functions. In this work, we propose to use Convolutional Neural Networks for the automatic classification of shot scale into Close-, Medium-, or Long-shots.
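A minimal sketch of such a classifier, assuming an ImageNet-pretrained torchvision backbone (ResNet-50 here is an assumption, not necessarily the paper's architecture): replace the final layer with a three-way head for Close-, Medium-, and Long-shots and fine-tune with cross-entropy.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # Close-, Medium-, Long-shot

# Assumed backbone: ImageNet-pretrained ResNet-50 with a replaced head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Typical fine-tuning recipe on frames sampled from each shot.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```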

Paper Details

Authors:
Mattia Savardi, Alberto Signoroni, Pierangelo Migliorati, and Sergio Benini
Submitted On:
5 October 2018 - 5:02am

Document Files

ICIP poster BENINI-TP.P5.2 (1865).pdf


Mattia Savardi, Alberto Signoroni, Pierangelo Migliorati, and Sergio Benini, "ICIP poster presentation Paper TP.P5.2 (1865): 'SHOT SCALE ANALYSIS IN MOVIES BY CONVOLUTIONAL NEURAL NETWORKS'," IEEE SigPort, 2018. Available: http://sigport.org/3523

WATCH, LISTEN ONCE, AND SYNC: AUDIO-VISUAL SYNCHRONIZATION WITH MULTI-MODAL REGRESSION CNN


Recovering audio-visual synchronization is an important task in the field of visual speech processing.

Paper Details

Authors:
Toshiki Kikuchi, Yuko Ozasa
Submitted On:
13 April 2018 - 12:19am

Document Files

Presentation Slides


Toshiki Kikuchi and Yuko Ozasa, "WATCH, LISTEN ONCE, AND SYNC: AUDIO-VISUAL SYNCHRONIZATION WITH MULTI-MODAL REGRESSION CNN," IEEE SigPort, 2018. Available: http://sigport.org/2585

Bimodal Codebooks Based Adult Video Detection


Multi-modality-based adult video detection is an effective approach to filtering pornography. However, existing methods lack accurate representations of multi-modal semantics. To address this issue, we propose a novel bimodal-codebook method for adult video detection. First, the audio codebook is created by periodicity analysis of labeled audio segments. Second, the visual codebook is generated by detecting regions of interest (ROIs) based on saliency analysis.
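Both codebooks ultimately map local descriptors to discrete codewords. The sketch below uses plain k-means as a stand-in for either clustering step (the paper derives the audio codebook from periodicity features and the visual codebook from saliency-based ROI descriptors; `k` is an assumed size) and shows the usual bag-of-words encoding of a segment.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, k=256):
    """Cluster local descriptors (rows of a 2D array) into k codewords."""
    return KMeans(n_clusters=k, n_init=10).fit(descriptors).cluster_centers_

def encode(descriptors, codebook):
    """Hard-assign each descriptor to its nearest codeword and return a
    normalized bag-of-words histogram for the segment."""
    dists = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook))
    return hist / max(hist.sum(), 1)
```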

Paper Details

Submitted On:
12 November 2017 - 4:59am

Document Files

GlobalSIP 2017 1158 - Bimodal Codebooks Based Adult Video Detection.pdf


"Bimodal Codebooks Based Adult Video Detection," IEEE SigPort, 2017. Available: http://sigport.org/2310

VISUAL AND TEXTUAL SENTIMENT ANALYSIS USING DEEP FUSION CONVOLUTIONAL NEURAL NETWORKS


Sentiment analysis is attracting more and more attention and has become a very active research topic due to its potential applications in personalized recommendation, opinion mining, and related areas. Most existing methods are based on either textual or visual data alone and cannot achieve satisfactory results, as it is very hard to extract sufficient information from a single modality.

Paper Details

Authors:
Xingyue Chen, Yunhong Wang, Qingjie Liu
Submitted On:
14 September 2017 - 4:15am

Document Files

Chen_ICIP17_DeepFusion_Slides.pdf


Xingyue Chen, Yunhong Wang, and Qingjie Liu, "VISUAL AND TEXTUAL SENTIMENT ANALYSIS USING DEEP FUSION CONVOLUTIONAL NEURAL NETWORKS," IEEE SigPort, 2017. Available: http://sigport.org/2000

CHAM: ACTION RECOGNITION USING CONVOLUTIONAL HIERARCHICAL ATTENTION MODEL


(1) We improve the soft attention model by introducing convolutional operations inside the LSTM cell and the attention map generation process to capture the spatial layout. (2) We build a hierarchical two-layer LSTM model for action recognition. (3) We test our model on three widely used datasets, the UCF Sports, Olympic, and HMDB51 datasets, with improved results over other published work.
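Point (1) amounts to a convolutional LSTM cell: the gate transforms are convolutions, so the hidden state keeps its spatial layout and attention maps can be generated from it. A minimal PyTorch sketch, with kernel size and channel widths as assumptions:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gate transforms are convolutions, so the hidden
    state keeps its spatial layout. Kernel size and hidden width are
    illustrative assumptions, not the paper's exact settings."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One conv produces all four gates (input, forget, output, cell).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state  # both (N, hid_ch, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c
```

A hierarchical model in the spirit of point (2) would stack two such cells, feeding the first cell's hidden-state sequence into the second.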

Paper Details

Authors:
Shiyang Yan, Jeremy S. Smith, Wenjin Lu, Bailing Zhang
Submitted On:
28 August 2017 - 11:17pm

Document Files

poster.pdf


Shiyang Yan, Jeremy S. Smith, Wenjin Lu, and Bailing Zhang, "CHAM: ACTION RECOGNITION USING CONVOLUTIONAL HIERARCHICAL ATTENTION MODEL," IEEE SigPort, 2017. Available: http://sigport.org/1814

EMOTION RECOGNITION THROUGH INTEGRATING EEG AND PERIPHERAL SIGNALS


The inherent dependencies among multiple physiological signals are crucial for multimodal emotion recognition but have not yet been thoroughly exploited. This paper proposes to use a restricted Boltzmann machine (RBM) to model such dependencies. Specifically, the visible nodes of the RBM represent the EEG and peripheral physiological signals, so the connections between visible and hidden nodes capture the intrinsic relations among the signals. The RBM also generates a new joint representation from the multiple physiological signals.
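For intuition, here is a minimal contrastive-divergence (CD-1) update for a Bernoulli RBM. In the paper's setup the visible layer holds the EEG and peripheral features (which, being continuous, would more likely use Gaussian visible units); this binary version and the learning rate are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One CD-1 update for a Bernoulli RBM. v0: (N, V) data batch;
    W: (V, H) weights; b: (V,) visible bias; c: (H,) hidden bias.
    The weights W are what capture the cross-signal dependencies."""
    h0 = sigmoid(v0 @ W + c)                       # hidden given data
    h_sample = (h0 > rng.random(h0.shape)).astype(float)
    v1 = sigmoid(h_sample @ W.T + b)               # reconstruction
    h1 = sigmoid(v1 @ W + c)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return W, b, c
```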

Paper Details

Authors:
Yangyang Shu and Shangfei Wang
Submitted On:
13 March 2017 - 9:31pm

Document Files

ICASSP_YANGYANG.pdf


Yangyang Shu and Shangfei Wang, "EMOTION RECOGNITION THROUGH INTEGRATING EEG AND PERIPHERAL SIGNALS," IEEE SigPort, 2017. Available: http://sigport.org/1762

Personalized video emotion tagging through a topic model


The inherent dependencies among video content, personal characteristics, and perceived emotion are crucial for personalized video emotion tagging, but have not been thoroughly exploited. To address this, we propose a novel topic model to capture these inherent dependencies. We assume that there are several latent human factors, or "topics," that affect both personal characteristics and personalized emotional responses to videos.
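As a toy illustration of the assumed generative story, the sketch below draws one latent "human factor" topic per viewer and then samples each (viewer, video) emotion tag from a topic- and video-specific distribution. The shapes and distributions are illustrative assumptions, not the paper's exact model; inference (e.g., by Gibbs sampling or variational methods) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_tags(theta, phi, n_viewers, n_videos):
    """Toy generative view: theta is a (K,) prior over latent human-factor
    topics; phi is (K, n_videos, n_emotions), the per-topic, per-video
    emotion distribution. Each viewer draws a topic z, then each
    (viewer, video) tag is drawn from phi[z, video]."""
    tags = np.empty((n_viewers, n_videos), dtype=int)
    for u in range(n_viewers):
        z = rng.choice(len(theta), p=theta)        # latent human factor
        for v in range(n_videos):
            tags[u, v] = rng.choice(phi.shape[-1], p=phi[z, v])
    return tags
```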

Paper Details

Authors:
Shan Wu, Shangfei Wang, Zhen Gao
Submitted On:
13 March 2017 - 9:22pm

Document Files

ICASSP_SHAN.pdf


Shan Wu, Shangfei Wang, and Zhen Gao, "Personalized video emotion tagging through a topic model," IEEE SigPort, 2017. Available: http://sigport.org/1761
