
Speech Processing

MULTI-HEAD ATTENTION FOR SPEECH EMOTION RECOGNITION WITH AUXILIARY LEARNING OF GENDER RECOGNITION


The paper presents a multi-head attention deep learning network for Speech Emotion Recognition (SER) that takes Log mel-Filter Bank Energy (LFBE) spectral features as input. Multi-head attention, together with a position embedding, jointly attends to information from different representations of the same LFBE input sequence. The position embedding helps the model attend to the dominant emotion features by identifying where those features occur in the sequence. In addition to multi-head attention and position embedding, we apply multi-task learning with gender recognition as an auxiliary task.
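No code accompanies this listing, but the described architecture maps onto standard building blocks. Below is a minimal PyTorch sketch, assuming 40-dimensional LFBE frames, a learned position embedding, one shared multi-head attention layer, and separate emotion and gender output heads; every dimension, module name, and the auxiliary loss weight is illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class MHASER(nn.Module):
    """Sketch of multi-head attention SER with a gender auxiliary head.

    Assumed input shape: (batch, frames, 40) LFBE features.
    """
    def __init__(self, n_mels=40, d_model=128, n_heads=4,
                 n_emotions=4, max_len=1000):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)       # LFBE -> model dimension
        self.pos = nn.Embedding(max_len, d_model)    # learned position embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.emotion_head = nn.Linear(d_model, n_emotions)  # main task
        self.gender_head = nn.Linear(d_model, 2)            # auxiliary task

    def forward(self, lfbe):
        b, t, _ = lfbe.shape
        pos_ids = torch.arange(t, device=lfbe.device)
        x = self.proj(lfbe) + self.pos(pos_ids)      # inject position information
        x, _ = self.attn(x, x, x)                    # self-attention over frames
        pooled = x.mean(dim=1)                       # utterance-level pooling
        return self.emotion_head(pooled), self.gender_head(pooled)

# Multi-task training: weight the auxiliary gender loss by a small alpha.
model = MHASER()
lfbe = torch.randn(8, 300, 40)
emo_logits, gen_logits = model(lfbe)
emo_y, gen_y = torch.randint(0, 4, (8,)), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(emo_logits, emo_y) \
     + 0.3 * nn.functional.cross_entropy(gen_logits, gen_y)
```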

Paper Details

Authors: Periyasamy Paramasivam, Promod Yenigalla
Submitted On: 21 May 2020 - 11:36pm

Document Files

ICASSP.pdf

Cite

[1] Periyasamy Paramasivam, Promod Yenigalla, "MULTI-HEAD ATTENTION FOR SPEECH EMOTION RECOGNITION WITH AUXILIARY LEARNING OF GENDER RECOGNITION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5430. Accessed: Jul. 04, 2020.

DEEP ENCODED LINGUISTIC AND ACOUSTIC CUES FOR ATTENTION BASED END TO END SPEECH EMOTION RECOGNITION

Paper Details

Authors: Swapnil Bhosale, Rupayan Chakraborty, Sunil Kumar Kopparapu
Submitted On: 20 May 2020 - 5:22am

Document Files

ICASSP_presentation_5598.pdf

Cite

[1] Swapnil Bhosale, Rupayan Chakraborty, Sunil Kumar Kopparapu, "DEEP ENCODED LINGUISTIC AND ACOUSTIC CUES FOR ATTENTION BASED END TO END SPEECH EMOTION RECOGNITION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5404. Accessed: Jul. 04, 2020.

Multi-Conditioning & Data Augmentation using Generative Noise Model for Speech Emotion Recognition in Noisy Conditions


Degradation due to additive noise is a significant roadblock to the real-life deployment of Speech Emotion Recognition (SER) systems. Most previous work in this field has dealt with noise degradation either at the signal level or at the feature level. In this paper, to address the robustness of SER in additive-noise scenarios, we propose multi-conditioning and data augmentation using an utterance-level parametric generative noise model. The generative noise model is designed to generate noise types that span the entire noise space in the mel-filterbank energy domain.
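The listing gives no implementation details, but the idea of a parametric generative noise model in the mel-filterbank energy domain can be sketched as below. The particular parametric form (a smoothed random spectral envelope, mixed in at a chosen utterance-level SNR) is our assumption for illustration, not the paper's model.

```python
import numpy as np

def sample_noise_profile(n_mels=40, smoothness=5, rng=None):
    """Draw a random spectral envelope in the mel-filterbank energy domain.

    Illustrative parametric form: white noise smoothed across mel bands,
    so successive draws cover many plausible noise spectral shapes.
    """
    rng = rng or np.random.default_rng()
    raw = rng.standard_normal(n_mels)
    kernel = np.ones(smoothness) / smoothness
    envelope = np.convolve(raw, kernel, mode="same")
    return np.exp(envelope)                      # positive energies

def augment(clean_fbank, snr_db, rng=None):
    """Mix a clean mel-filterbank energy matrix (frames x mels) with
    generated noise at (approximately) the requested utterance-level SNR."""
    rng = rng or np.random.default_rng()
    noise = sample_noise_profile(clean_fbank.shape[1], rng=rng)
    # Mild frame-to-frame variation around the sampled envelope.
    noise = noise[None, :] * rng.uniform(0.8, 1.2, size=(clean_fbank.shape[0], 1))
    scale = clean_fbank.mean() / (noise.mean() * 10 ** (snr_db / 10))
    return clean_fbank + scale * noise

# Multi-conditioning: train on copies of each utterance at several SNRs.
clean = np.abs(np.random.randn(300, 40))
augmented = [augment(clean, snr) for snr in (20, 10, 5, 0)]
```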

Paper Details

Authors: Upasana Tiwari, Meet Soni, Rupayan Chakraborty, Ashish Panda, Sunil Kumar Kopparapu
Submitted On: 20 May 2020 - 5:01am

Document Files

ICASSP2020_ppt_5701.pdf

Cite

[1] Upasana Tiwari, Meet Soni, Rupayan Chakraborty, Ashish Panda, Sunil Kumar Kopparapu, "Multi-Conditioning & Data Augmentation using Generative Noise Model for Speech Emotion Recognition in Noisy Conditions", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5403. Accessed: Jul. 04, 2020.

Generating and Protecting Against Adversarial Attacks for Deep Speech-based Emotion Recognition Models

Paper Details

Authors: Zhao Ren, Alice Baird, Jing Han, Zixing Zhang, Björn Schuller
Submitted On: 14 May 2020 - 7:07am

Document Files

ICASSP_slides_ZhaoRen.pdf

Cite

[1] Zhao Ren, Alice Baird, Jing Han, Zixing Zhang, Björn Schuller, "Generating and Protecting Against Adversarial Attacks for Deep Speech-based Emotion Recognition Models", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5282. Accessed: Jul. 04, 2020.

Defense against adversarial attacks on spoofing countermeasures of ASV


Various cutting-edge countermeasure methods for automatic speaker verification (ASV), with considerable anti-spoofing performance, were proposed in the ASVspoof 2019 challenge. However, previous work has shown that countermeasure models are vulnerable to adversarial examples indistinguishable from natural data. A good countermeasure model should be robust not only to spoofed audio, including synthetic, converted, and replayed audio, but also to examples deliberately crafted by malicious adversaries.
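To make the threat model concrete, the sketch below crafts adversarial examples against a generic countermeasure classifier using the standard fast gradient sign method (FGSM); the paper's actual attack and defense may differ, and `countermeasure` is a placeholder module.

```python
import torch
import torch.nn as nn

def fgsm_attack(countermeasure, features, labels, epsilon=0.002):
    """Craft an adversarial perturbation of anti-spoofing input features.

    `countermeasure` is any differentiable bonafide/spoof classifier and
    `features` a (batch, ...) tensor of its inputs; both are placeholders.
    """
    features = features.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(countermeasure(features), labels)
    loss.backward()
    # One signed-gradient step pushes inputs toward misclassification
    # while keeping the perturbation imperceptibly small.
    return (features + epsilon * features.grad.sign()).detach()
```

One generic defense consistent with the abstract's goal is adversarial training, i.e., mixing such perturbed examples back into the countermeasure's training data.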

Paper Details

Authors: Haibin Wu, Songxiang Liu, Helen Meng, Hung-yi Lee
Submitted On: 13 May 2020 - 9:21pm

Document Files

ICASSP REPORT.pdf

Cite

[1] Haibin Wu, Songxiang Liu, Helen Meng, Hung-yi Lee, "Defense against adversarial attacks on spoofing countermeasures of ASV", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5176. Accessed: Jul. 04, 2020.

END-TO-END ARTICULATORY MODELING FOR DYSARTHRIC ARTICULATORY ATTRIBUTE DETECTION


In this study, we focus on detecting articulatory attribute errors for dysarthric patients with cerebral palsy (CP) or amyotrophic lateral sclerosis (ALS). The task poses two major challenges. First, the pronunciation of dysarthric patients is unclear and inaccurate, which leads to poor performance from traditional automatic speech recognition (ASR) and automatic speech attribute transcription (ASAT) systems. Second, data are limited because recording such speech is difficult.

Paper Details

Authors: Yuqin Lin, Longbiao Wang, Jianwu Dang, Sheng Li, Chenchen Ding
Submitted On: 18 April 2020 - 9:23am

Document Files

ICASSP2020_poster_lin.pdf

Cite

[1] Yuqin Lin, Longbiao Wang, Jianwu Dang, Sheng Li, Chenchen Ding, "END-TO-END ARTICULATORY MODELING FOR DYSARTHRIC ARTICULATORY ATTRIBUTE DETECTION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5105. Accessed: Jul. 04, 2020.

A CLASSIFICATION-AIDED FRAMEWORK FOR NON-INTRUSIVE SPEECH QUALITY ASSESSMENT


Objective metrics, such as the perceptual evaluation of speech quality (PESQ), have become standard measures for evaluating speech. These metrics enable efficient, low-cost evaluations, where ratings are typically computed by comparing a degraded speech signal to its underlying clean reference signal. Reference-based metrics, however, cannot be used to evaluate real-world signals whose references are inaccessible. This project develops a non-intrusive framework for evaluating the perceptual quality of noisy and enhanced speech.
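One plausible reading of "classification-aided" is a shared encoder with a quality-score regressor plus an auxiliary degradation classifier; the PyTorch sketch below follows that reading, and all layer choices and class counts are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class NonIntrusiveQA(nn.Module):
    """Joint quality regression and degradation classification, no reference needed."""
    def __init__(self, n_feats=40, hidden=128, n_classes=5):
        super().__init__()
        self.encoder = nn.GRU(n_feats, hidden, batch_first=True)
        self.quality = nn.Linear(hidden, 1)          # e.g. a PESQ-like score
        self.degrade = nn.Linear(hidden, n_classes)  # auxiliary classifier

    def forward(self, feats):                        # feats: (batch, frames, n_feats)
        _, h = self.encoder(feats)                   # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.quality(h).squeeze(-1), self.degrade(h)

# Both heads share the encoder, so the classification task regularises
# (aids) the quality prediction during joint training.
model = NonIntrusiveQA()
score, degrade_logits = model(torch.randn(8, 300, 40))
```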

Paper Details

Authors: Xuan Dong, Donald Williamson
Submitted On: 21 October 2019 - 2:28pm

Document Files

WASPAA.v3.pdf

Cite

[1] Xuan Dong, Donald Williamson, "A CLASSIFICATION-AIDED FRAMEWORK FOR NON-INTRUSIVE SPEECH QUALITY ASSESSMENT", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4883. Accessed: Jul. 04, 2020.

Speech Landmark Bigrams for Depression Detection from Naturalistic Smartphone Speech


Detection of depression from speech has attracted significant research attention in recent years but remains a challenge, particularly for speech from diverse smartphones in natural environments. This paper proposes two sets of novel features based on speech landmark bigrams associated with abrupt speech articulatory events for depression detection from smartphone audio recordings. Combined with techniques adapted from natural language text processing, the proposed features further exploit landmark bigrams by discovering latent articulatory events.
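The core of the feature extraction reduces to counting ordered pairs of consecutive landmark symbols. A minimal sketch follows, assuming landmark detection has already been run and the output encoded as symbols such as "g+" or "b-" (an illustrative, Stevens-style inventory, not necessarily the paper's).

```python
from collections import Counter

def landmark_bigram_features(landmarks):
    """Normalised bigram counts over a detected landmark symbol sequence.

    `landmarks` is a list of landmark symbols, e.g. ["g+", "b-", "s+"].
    Mapped onto a fixed bigram vocabulary, the result becomes a
    fixed-length feature vector for a downstream classifier.
    """
    counts = Counter(zip(landmarks, landmarks[1:]))   # consecutive pairs
    total = sum(counts.values()) or 1
    return {bigram: n / total for bigram, n in counts.items()}

# Example: relative frequencies of each ordered landmark pair.
print(landmark_bigram_features(["g+", "b-", "g+", "b-", "s+"]))
```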

Paper Details

Authors: Zhaocheng Huang, Julien Epps, Dale Joachim
Submitted On: 6 June 2019 - 4:42am

Document Files

ICASSP2019_Huang_V01_uploaded.pdf

Cite

[1] Zhaocheng Huang, Julien Epps, Dale Joachim, "Speech Landmark Bigrams for Depression Detection from Naturalistic Smartphone Speech", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4565. Accessed: Jul. 04, 2020.

Adversarial Speaker Adaptation


We propose a novel adversarial speaker adaptation (ASA) scheme, in which adversarial learning is applied to regularize the distribution of deep hidden features in a speaker-dependent (SD) deep neural network (DNN) acoustic model to be close to that of a fixed speaker-independent (SI) DNN acoustic model during adaptation. An additional discriminator network is introduced to distinguish the deep features generated by the SD model from those produced by the SI model.
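The regularization described here can be written as a min-max game between the SD model and the discriminator. A minimal PyTorch sketch of one adaptation step is below; the two-output model interface, optimizers, and loss weight `alpha` are all assumptions for illustration.

```python
import torch
import torch.nn as nn

def asa_step(sd_model, si_model, discriminator, opt_sd, opt_d,
             batch, labels, alpha=0.5):
    """One adversarial speaker adaptation step (illustrative).

    sd_model / si_model are assumed to return (deep_features, logits);
    the SI model stays frozen throughout adaptation.
    """
    sd_feats, logits = sd_model(batch)
    with torch.no_grad():
        si_feats, _ = si_model(batch)

    # 1) Train the discriminator to tell SD features from SI features.
    d_in = torch.cat([sd_feats.detach(), si_feats])
    d_y = torch.cat([torch.ones(sd_feats.size(0), device=d_in.device),
                     torch.zeros(si_feats.size(0), device=d_in.device)])
    d_loss = nn.functional.binary_cross_entropy_with_logits(
        discriminator(d_in).squeeze(-1), d_y)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Adapt the SD model: fit the labels while fooling the discriminator,
    #    keeping the SD deep-feature distribution close to the SI one.
    fool = nn.functional.binary_cross_entropy_with_logits(
        discriminator(sd_feats).squeeze(-1),
        torch.zeros(sd_feats.size(0), device=sd_feats.device))
    loss = nn.functional.cross_entropy(logits, labels) + alpha * fool
    opt_sd.zero_grad(); loss.backward(); opt_sd.step()
```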

Paper Details

Authors: Zhong Meng, Jinyu Li, Yifan Gong
Submitted On: 12 May 2019 - 9:26pm

Document Files

asa_oral_v3.pptx

Cite

[1] Zhong Meng, Jinyu Li, Yifan Gong, "Adversarial Speaker Adaptation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4475. Accessed: Jul. 04, 2020.

Attentive Adversarial Learning for Domain-Invariant Training


Adversarial domain-invariant training (ADIT) proves to be effective in suppressing the effects of domain variability in acoustic modeling and has led to improved performance in automatic speech recognition (ASR). In ADIT, an auxiliary domain classifier takes in equally-weighted deep features from a deep neural network (DNN) acoustic model and is trained to improve their domain-invariance by optimizing an adversarial loss function.
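The "attentive" variant replaces that equal weighting with learned attention weights over frames before the domain classifier. A sketch of the pooling step, under assumed shapes and names:

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Learned frame weights replacing the equal weighting of plain ADIT."""
    def __init__(self, d_feat=256):
        super().__init__()
        self.score = nn.Linear(d_feat, 1)    # one scalar score per frame

    def forward(self, deep_feats):           # (batch, frames, d_feat)
        w = torch.softmax(self.score(deep_feats), dim=1)  # attention weights
        return (w * deep_feats).sum(dim=1)   # weighted, not equal, average

# The pooled vector then feeds the auxiliary domain classifier, trained
# adversarially (e.g. through a gradient reversal layer) against the
# acoustic model so the pooled features become domain-invariant.
pooled = AttentivePooling()(torch.randn(8, 300, 256))
```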

Paper Details

Authors: Zhong Meng, Jinyu Li, Yifan Gong
Submitted On: 12 May 2019 - 9:03pm

Document Files

aadit_poster.pptx

Cite

[1] Zhong Meng, Jinyu Li, Yifan Gong, "Attentive Adversarial Learning for Domain-Invariant Training", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4474. Accessed: Jul. 04, 2020.
