
Spoken Language Understanding (SLP-UNDE)

Multimodal One-shot Learning of Speech and Images


Imagine a robot is shown new concepts visually together with spoken tags, e.g. "milk", "eggs", "butter". After seeing one paired audiovisual example per class, it is shown a new set of unseen instances of these objects and asked to pick the "milk". Without receiving any hard labels, could it learn to match the new continuous speech input to the correct visual instance? Although unimodal one-shot learning, where one labelled example in a single modality is given per class, has been studied, this example motivates multimodal one-shot learning.
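As a toy illustration of the matching task (not the paper's model), one can embed the one-shot support set and a spoken query into a shared audio-visual space and pick the nearest class by cosine similarity; the embeddings below are made-up placeholders:

```python
import numpy as np

def nearest_class(query_emb, support_embs):
    """Return the index of the support item closest to the query
    by cosine similarity. Assumes all embeddings already live in a
    shared audio-visual space (a simplifying assumption here)."""
    q = query_emb / np.linalg.norm(query_emb)
    S = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    return int(np.argmax(S @ q))

# one (made-up) support embedding per class: "milk", "eggs", "butter"
support = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.9, 0.1])          # spoken "milk", embedded
print(nearest_class(query, support))  # -> 0
```

In practice the hard part is learning the two embedding functions from a single paired example per class; the nearest-neighbour matching step itself stays this simple.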

Paper Details

Authors:
Ryan Eloff, Herman A. Engelbrecht, Herman Kamper
Submitted On:
10 May 2019 - 6:38pm

Document Files

icassp2019_poster_multimodal_oneshot


[1] Ryan Eloff, Herman A. Engelbrecht, Herman Kamper, "Multimodal One-shot Learning of Speech and Images", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4421. Accessed: Oct. 16, 2019.

Context-aware Neural-based Dialog Act Classification On Automatically Generated Transcriptions


This paper presents our latest investigations into dialog act (DA) classification on automatically generated transcriptions. We propose a novel approach that combines convolutional neural networks (CNNs) and conditional random fields (CRFs) for context modeling in DA classification. We explore the impact on final performance of transcriptions generated by different automatic speech recognition systems, such as hybrid TDNN/HMM and end-to-end systems. Experimental results on two benchmark datasets (MRDA and SwDA) show that combining CNNs and CRFs consistently improves accuracy.
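The context-modeling role of the CRF can be illustrated by its Viterbi decoding step: per-utterance label scores (standing in here for CNN outputs) are combined with label-transition scores to pick a coherent DA sequence. A minimal sketch with made-up scores, not the authors' implementation:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence under a linear-chain CRF:
    emissions[t, k] scores label k for utterance t (e.g. a CNN output),
    transitions[i, j] scores moving from label i to label j."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy scores: 3 utterances, 2 DA labels; transitions favor staying
emissions = np.array([[2.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
transitions = np.array([[1.0, 0.0], [0.0, 1.0]])
print(viterbi(emissions, transitions))  # -> [0, 0, 0]
```

Note how the middle utterance, which locally prefers label 1, is overridden by its context: exactly the effect the CRF layer adds on top of per-utterance classification.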

Paper Details

Authors:
Daniel Ortega, Chia-Yu Li, Gisela Vallejo, Pavel Denisov, Ngoc Thang Vu
Submitted On:
10 May 2019 - 6:42am

Document Files

ICASSP_2019_Ortega_poster.pdf


[1] Daniel Ortega, Chia-Yu Li, Gisela Vallejo, Pavel Denisov, Ngoc Thang Vu, "Context-aware Neural-based Dialog Act Classification On Automatically Generated Transcriptions", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4301. Accessed: Oct. 16, 2019.

QUESTION ANSWERING FOR SPOKEN LECTURE PROCESSING


This paper presents a question answering (QA) system developed for spoken lecture processing. The questions are presented to the system in written form, and the answers are returned from lecture videos. In contrast to the widely studied reading-comprehension-style QA, where the machine understands a passage of text and answers questions related to that passage, our task introduces the challenge of searching for answers in longer texts, where the text corresponds to the error-prone transcripts of the lecture videos.
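A simple way to picture the "search over a long, erroneous transcript" challenge is a sliding-window retrieval stage that scores transcript spans by word overlap with the question. This is a toy design assumed for illustration, not the paper's actual model:

```python
def best_window(transcript_tokens, question_tokens, width=50, stride=25):
    """Score overlapping windows of a long (possibly erroneous)
    transcript by word overlap with the question; return the
    (start, end) token span of the best-scoring window."""
    q = set(question_tokens)
    best, best_score = (0, 0), -1
    for start in range(0, max(1, len(transcript_tokens) - width + 1), stride):
        window = transcript_tokens[start:start + width]
        score = sum(tok in q for tok in window)
        if score > best_score:
            best, best_score = (start, start + len(window)), score
    return best

tokens = "the cat sat on the mat near the dog".split()
print(best_window(tokens, ["mat", "dog"], width=4, stride=1))  # -> (5, 9)
```

ASR errors directly hurt such lexical matching, which is why QA over transcripts is harder than QA over clean passages.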

Paper Details

Authors:
Merve Unlu, Ebru Arisoy, Murat Saraclar
Submitted On:
9 May 2019 - 5:35pm

Document Files

ICASSP2019merve_poster.pdf


[1] Merve Unlu, Ebru Arisoy, Murat Saraclar, "QUESTION ANSWERING FOR SPOKEN LECTURE PROCESSING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4243. Accessed: Oct. 16, 2019.

REVISITING HIDDEN MARKOV MODELS FOR SPEECH EMOTION RECOGNITION

Paper Details

Authors:
Dehua Tao, Guangyan Zhang, P. C. Ching and Tan Lee
Submitted On:
9 May 2019 - 2:14am

Document Files

ICASSP2019_Poster_symao.pdf


[1] Dehua Tao, Guangyan Zhang, P. C. Ching and Tan Lee, "REVISITING HIDDEN MARKOV MODELS FOR SPEECH EMOTION RECOGNITION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4152. Accessed: Oct. 16, 2019.

USING DEEP-Q NETWORK TO SELECT CANDIDATES FROM N-BEST SPEECH RECOGNITION HYPOTHESES FOR ENHANCING DIALOGUE STATE TRACKING

Paper Details

Authors:
Richard Tzong-Han Tsai, Chia-Hao Chen, Chun-Kai Wu, Yu-Cheng Hsiao, Hung-Yi Lee
Submitted On:
11 April 2019 - 4:05am

Document Files

ICASSP 2019#2338.pdf


[1] Richard Tzong-Han Tsai, Chia-Hao Chen, Chun-Kai Wu, Yu-Cheng Hsiao, Hung-Yi Lee, "USING DEEP-Q NETWORK TO SELECT CANDIDATES FROM N-BEST SPEECH RECOGNITION HYPOTHESES FOR ENHANCING DIALOGUE STATE TRACKING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3888. Accessed: Oct. 16, 2019.

Adversarial Advantage Actor-Critic Model for Task-Completion Dialogue Policy Learning


This paper presents a new method, adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GANs), we train a discriminator to differentiate the responses/actions generated by dialogue agents from those of experts.
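One common way (assumed here for illustration; the paper's exact objective may differ) to fold a GAN-style discriminator into policy learning is reward shaping: add the log of the discriminator's probability that the agent's action is expert-like to the environment reward.

```python
import math

def shaped_reward(env_reward, disc_logit, weight=1.0):
    """Add weight * log D(s, a) to the environment reward, where D is
    the discriminator's probability that the action is expert-like.
    A generic GAN-inspired shaping term, not the paper's exact loss."""
    d = 1.0 / (1.0 + math.exp(-disc_logit))  # sigmoid -> P(expert-like)
    return env_reward + weight * math.log(d)

# logit 0 means D = 0.5, i.e. the discriminator is maximally unsure
print(round(shaped_reward(1.0, 0.0), 4))  # -> 0.3069
```

Actions the discriminator flags as non-expert-like get pushed down, giving the actor-critic learner a denser training signal than sparse task-completion rewards alone.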

Paper Details

Authors:
Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, Kam-Fai Wong
Submitted On:
20 April 2018 - 12:23pm

Document Files

poster_icassp2018_v2.pptx


[1] Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, Kam-Fai Wong, "Adversarial Advantage Actor-Critic Model for Task-Completion Dialogue Policy Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3110. Accessed: Oct. 16, 2019.

AN END-TO-END APPROACH TO JOINT SOCIAL SIGNAL DETECTION AND AUTOMATIC SPEECH RECOGNITION


Social signals such as laughter and fillers are often observed in natural conversation, and they play various roles in human-to-human communication. Detecting these events is useful both for transcription systems, to generate rich transcriptions, and for dialogue systems, to behave as humans do, e.g., laughing in synchrony or listening attentively. We have studied an end-to-end approach that directly detects social signals from speech using connectionist temporal classification (CTC), one of the end-to-end sequence labelling models.
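The output side of such a model can be illustrated by CTC greedy decoding: collapse repeated framewise labels, then drop blanks. With a label inventory that includes special tokens like <laughter> or <filler> (an assumed inventory, for illustration), the same pass yields words and social signals jointly:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a framewise CTC output into an event sequence:
    merge consecutive repeats, then drop blank labels."""
    out, prev = [], blank
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out

# frames: blank, "hi"(1), "hi", blank, <laughter>(2), <laughter>
print(ctc_greedy_decode([0, 1, 1, 0, 2, 2]))  # -> [1, 2]
```

Because CTC needs no framewise alignment, laughter and filler segments can be learned from utterance-level label sequences alone.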

Paper Details

Authors:
Hirofumi Inaguma, Masato Mimura, Koji Inoue, Kazuyoshi Yoshii, Tatsuya Kawahara
Submitted On:
17 April 2018 - 7:49pm

Document Files

201804_ICASSP2018_poster.pdf


[1] Hirofumi Inaguma, Masato Mimura, Koji Inoue, Kazuyoshi Yoshii, Tatsuya Kawahara, "AN END-TO-END APPROACH TO JOINT SOCIAL SIGNAL DETECTION AND AUTOMATIC SPEECH RECOGNITION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2947. Accessed: Oct. 16, 2019.

Incorporating ASR Errors with Attention-based, Jointly Trained RNN for Intent Detection and Slot Filling

Paper Details

Authors:
Submitted On:
19 April 2018 - 3:10pm

Document Files

schumann_icassp_presentation.pdf


[1] , "Incorporating ASR Errors with Attention-based, Jointly Trained RNN for Intent Detection and Slot Filling", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2903. Accessed: Oct. 16, 2019.

ATTENTION-BASED LSTM FOR PSYCHOLOGICAL STRESS DETECTION FROM SPOKEN LANGUAGE USING DISTANT SUPERVISION

Paper Details

Authors:
Genta Indra Winata, Onno Pepijn Kampman, Pascale Fung
Submitted On:
14 April 2018 - 8:45am

Document Files

attention-based-lstm-poster.pdf


[1] Genta Indra Winata, Onno Pepijn Kampman, Pascale Fung, "ATTENTION-BASED LSTM FOR PSYCHOLOGICAL STRESS DETECTION FROM SPOKEN LANGUAGE USING DISTANT SUPERVISION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2829. Accessed: Oct. 16, 2019.
