ICASSP 2020

ICASSP is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The ICASSP 2020 conference will feature world-class presentations by internationally renowned speakers and cutting-edge session topics, and will provide an excellent opportunity to network with like-minded professionals from around the world.

Hearing Aid Research Data Set for Acoustic Environment Recognition (HEAR-DS)


State-of-the-art hearing aids (HA) are limited in recognizing acoustic environments, and much research effort is spent on improving the listening experience for HA users in every acoustic situation. There is, however, no dedicated public database for training acoustic environment recognition algorithms with a specific focus on HA applications that accounts for their requirements. Existing acoustic scene classification databases are inappropriate for HA signal processing.
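
For illustration, the sketch below shows one plausible way to consume such a database: loading a raw audio cut and extracting log-mel features for an acoustic environment classifier. This is a minimal sketch, not the authors' pipeline; the file path, sampling rate, and feature settings are assumptions.

# Minimal sketch (assumed setup): log-mel features from one HEAR-DS audio cut.
import numpy as np
import librosa

def logmel(path, sr=16000, n_mels=64):
    """Load one audio cut and return a log-mel spectrogram (n_mels x frames)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                         hop_length=256, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Hypothetical usage, with a made-up file name under the RawAudioCuts folder:
# X = logmel("RawAudioCuts/InTraffic/cut_0001.wav")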

Paper Details

Authors:
Kamil Adiloğlu, Jörg-Hendrik Bach
Submitted On:
18 May 2020 - 7:01am

Document Files

https://download.hoertech.de/hear-ds-data/HEAR-DS/RawAudioCuts/doc/icassp2020-hear-ds-presentation-huewel.mp4

Cite
[1] Kamil Adiloğlu, Jörg-Hendrik Bach, "Hearing Aid Research Data Set for Acoustic Environment Recognition (HEAR-DS)", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5392. Accessed: Aug. 12, 2020.

A COMPARATIVE STUDY OF ESTIMATING ARTICULATORY MOVEMENTS FROM PHONEME SEQUENCES AND ACOUSTIC FEATURES


Unlike phoneme sequences, the movements of the speech articulators (lips, tongue, jaw, velum) and the resultant acoustic signal are known to encode not only the linguistic message but also para-linguistic information. While several works exist on estimating articulatory movements from acoustic signals, little is known about the extent to which articulatory movements can be predicted from linguistic information alone, i.e., the phoneme sequence.
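
As an illustration of the phoneme-to-articulatory direction, here is a minimal sketch assuming frame-aligned phoneme IDs as input and EMA-style articulatory trajectories as targets; the vocabulary size, output dimension, and architecture are placeholders, not the paper's setup.

# Minimal sketch (assumed setup): BiLSTM mapping phoneme IDs to trajectories.
import torch
import torch.nn as nn

class Phoneme2Articulatory(nn.Module):
    def __init__(self, n_phonemes=45, emb=64, hidden=128, n_artic=12):
        super().__init__()
        self.emb = nn.Embedding(n_phonemes, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_artic)  # e.g., x/y of 6 sensors

    def forward(self, phoneme_ids):           # (batch, frames), frame-aligned IDs
        h, _ = self.rnn(self.emb(phoneme_ids))
        return self.out(h)                     # (batch, frames, n_artic)

# Smoke test on random frame-aligned phoneme labels.
model = Phoneme2Articulatory()
pred = model(torch.randint(0, 45, (2, 200)))
print(pred.shape)                              # torch.Size([2, 200, 12])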

Paper Details

Authors:
Abhayjeet Singh, Aravind Illa, Prasanta Kumar Ghosh
Submitted On:
26 May 2020 - 5:45am

Document Files

Presentation slides

Cite
[1] Abhayjeet Singh, Aravind Illa, Prasanta Kumar Ghosh, "A COMPARATIVE STUDY OF ESTIMATING ARTICULATORY MOVEMENTS FROM PHONEME SEQUENCES AND ACOUSTIC FEATURES", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5391. Accessed: Aug. 12, 2020.

SPOKEN LANGUAGE ACQUISITION BASED ON REINFORCEMENT LEARNING AND WORD UNIT SEGMENTATION


The process of spoken language acquisition has been one of the topics that has attracted the greatest interest from linguists for decades. Using modern machine learning techniques, we simulate this process on computers, which helps to understand the mechanisms behind it and opens new possibilities for applying the concept to, among other things, intelligent robots. This paper proposes a new framework for simulating spoken language acquisition by combining reinforcement learning and unsupervised learning methods.
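
To make the reinforcement-learning half concrete, here is a toy sketch, not the authors' system: an agent emits a short sequence of discrete word units, receives a scalar reward from a stand-in environment, and updates its emission policy with REINFORCE. The unit inventory, reward rule, and network are all hypothetical.

# Toy sketch (assumed setup): REINFORCE over discrete word-unit sequences.
import torch
import torch.nn as nn

n_units, seq_len = 50, 5
policy = nn.Sequential(nn.Linear(n_units, 64), nn.Tanh(), nn.Linear(64, n_units))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def episode():
    logps, units = [], []
    prev = torch.zeros(n_units)                  # one-hot of previous unit
    for _ in range(seq_len):
        dist = torch.distributions.Categorical(logits=policy(prev))
        u = dist.sample()
        logps.append(dist.log_prob(u))
        units.append(int(u))
        prev = torch.nn.functional.one_hot(u, n_units).float()
    reward = 1.0 if 7 in units else 0.0          # stand-in environment feedback
    loss = -reward * torch.stack(logps).sum()    # REINFORCE gradient estimator
    opt.zero_grad(); loss.backward(); opt.step()

for _ in range(100):
    episode()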

Paper Details

Authors:
Shengzhou Gao, Wenxin Hou, Tomohiro Tanaka, Takahiro Shinozaki
Submitted On:
18 May 2020 - 3:34am

Document Files

2020_icassp_slacquisition_edit_hou.pdf

Cite
[1] Shengzhou Gao, Wenxin Hou, Tomohiro Tanaka, Takahiro Shinozaki, "SPOKEN LANGUAGE ACQUISITION BASED ON REINFORCEMENT LEARNING AND WORD UNIT SEGMENTATION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5390. Accessed: Aug. 12, 2020.

Learning to rank music tracks using triplet loss


Most music streaming services rely on automatic recommendation algorithms to exploit their large music catalogs. These algorithms aim to retrieve a ranked list of music tracks based on their similarity to a target music track. In this work, we propose a method for direct recommendation based on the audio content, without explicitly tagging the music tracks. To this end, we propose several strategies for mining triplets from ranked lists, and we train a Convolutional Neural Network to learn the similarity via a triplet loss.
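
To make the training objective concrete, here is a minimal sketch assuming a standard triplet margin loss on L2-normalized audio embeddings; the stand-in CNN, input shape, and margin are illustrative, not the paper's configuration.

# Minimal sketch (assumed setup): triplet loss on audio embeddings.
import torch
import torch.nn as nn

embed = nn.Sequential(                         # stand-in for the audio CNN
    nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, 128))

triplet = nn.TripletMarginLoss(margin=0.2)

def loss_on_batch(anchor, positive, negative):
    """Each input: (batch, 64 mel bands, frames). Triplets are mined from
    ranked lists: positives rank near the anchor, negatives rank far."""
    fa, fp, fneg = (nn.functional.normalize(embed(x), dim=1)
                    for x in (anchor, positive, negative))
    return triplet(fa, fp, fneg)

# Smoke test with random spectrogram patches.
x = lambda: torch.randn(8, 64, 100)
print(loss_on_batch(x(), x(), x()))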

Paper Details

Authors:
Laure Pretet, Gael Richard, Geoffroy Peeters
Submitted On:
18 May 2020 - 3:27am

Document Files

icassp.pdf

Cite
[1] Laure Pretet, Gael Richard, Geoffroy Peeters, "Learning to rank music tracks using triplet loss", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5389. Accessed: Aug. 12, 2020.

Weighted Speech Distortion Losses for Real-time Speech Enhancement


This paper investigates several aspects of training an RNN (recurrent neural network) that impact the objective and subjective quality of enhanced speech for real-time, single-channel speech enhancement. Specifically, we focus on an RNN that enhances short-time speech spectra on a single-frame-in, single-frame-out basis, a framework adopted by most classical signal processing methods. We propose two novel mean-squared-error-based learning objectives that enable separate control over the importance of speech distortion versus noise reduction.
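
The flavor of such an objective can be sketched as follows for a mask-based enhancer: one MSE term penalizes speech distortion (attenuated speech), another penalizes residual noise, and a weight alpha trades them off. This sketch is consistent with the abstract but not necessarily the paper's exact definitions; the shapes and the value of alpha are assumptions.

# Sketch (assumed formulation): weighted speech-distortion / noise-residual MSE.
import torch

def weighted_sd_loss(gain, speech_spec, noise_spec, alpha=0.35):
    """gain: predicted magnitude mask; specs: |STFT| of clean speech / noise."""
    speech_distortion = ((1.0 - gain) * speech_spec).pow(2).mean()
    noise_residual = (gain * noise_spec).pow(2).mean()
    return alpha * speech_distortion + (1.0 - alpha) * noise_residual

# Smoke test on random magnitudes (batch, freq bins, single frame).
g = torch.rand(4, 257, 1)
s, n = torch.rand(4, 257, 1), torch.rand(4, 257, 1)
print(weighted_sd_loss(g, s, n))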

Paper Details

Authors:
Ivan Tashev
Submitted On:
17 May 2020 - 7:34pm

Document Files

dns-public-v2short.pptx

Cite
[1] Ivan Tashev, "Weighted Speech Distortion Losses for Real-time Speech Enhancement", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5388. Accessed: Aug. 12, 2020.

Clock synchronization over networks using sawtooth models


Clock synchronization and ranging over a wireless network with low communication overhead are challenging goals with tremendous impact. In this paper, we study the use of time-to-digital converters in wireless sensors, which provides clock synchronization and ranging at negligible communication overhead through a sawtooth signal model for the round-trip times between two nodes.
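
As a toy illustration of the idea, not the paper's estimator, the sketch below simulates round-trip-time measurements whose clock-offset component wraps as a sawtooth, then recovers the sawtooth phase by brute force; every constant is made up.

# Toy sketch (assumed model): sawtooth round-trip times, brute-force phase fit.
import numpy as np

rng = np.random.default_rng(0)
T = 1e-6                                       # counter period (assumed)
true_phase, true_delay = 0.37, 5e-6
k = np.arange(200)                             # exchange index

def sawtooth(phase, skew=0.01):
    """Wrapped clock-offset residual with period T; skew makes it drift."""
    return ((phase + skew * k) % 1.0) * T

rtt = true_delay + sawtooth(true_phase) + 1e-9 * rng.standard_normal(k.size)

# Grid search over phase; the propagation delay is the fitted baseline.
phases = np.linspace(0, 1, 2000, endpoint=False)
errs = [np.var(rtt - sawtooth(p)) for p in phases]
print("estimated phase:", phases[int(np.argmin(errs))])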

Paper Details

Authors:
Pol del Aguila Pla, Lissy Pellaco, Satyam Dwivedi, Peter Händel and Joakim Jaldén
Submitted On:
17 May 2020 - 4:06pm

Document Files

Clock_synchronization_over_networks_using_sawtooth_models.pdf

Cite
[1] Pol del Aguila Pla, Lissy Pellaco, Satyam Dwivedi, Peter Händel and Joakim Jaldén, "Clock synchronization over networks using sawtooth models", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5387. Accessed: Aug. 12, 2020.

Attention based Curiosity-driven Exploration in Deep Reinforcement Learning


Reinforcement Learning makes it possible to train an agent via interaction with the environment. However, in the majority of real-world scenarios the extrinsic feedback is sparse or insufficient, so intrinsic reward formulations are needed to train the agent successfully. This work investigates and extends the paradigm of curiosity-driven exploration. First, a probabilistic approach is taken to exploit the advantages of the attention mechanism, which has been applied successfully in other domains of Deep Learning.
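
For reference, here is a minimal sketch of the baseline this work extends: plain curiosity-driven exploration, where the intrinsic reward is the prediction error of a learned forward model. The paper's attention-based, probabilistic extensions are not reproduced; the dimensions and optimizer settings are assumptions.

# Minimal sketch (assumed setup): forward-model prediction error as curiosity.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 4
forward_model = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                              nn.Linear(64, obs_dim))
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

def intrinsic_reward(obs, act_onehot, next_obs):
    """Train the forward model and return its per-transition error as bonus."""
    pred = forward_model(torch.cat([obs, act_onehot], dim=-1))
    err = (pred - next_obs).pow(2).mean(dim=-1)
    opt.zero_grad(); err.mean().backward(); opt.step()
    return err.detach()

# Smoke test on random transitions.
r = intrinsic_reward(torch.randn(16, obs_dim),
                     torch.eye(act_dim)[torch.randint(0, act_dim, (16,))],
                     torch.randn(16, obs_dim))
print(r.shape)                                 # torch.Size([16])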

Paper Details

Authors:
Patrik Reizinger, Márton Szemenyei
Submitted On:
17 May 2020 - 3:59pm

Document Files

ICASSP_presentation.pdf

Cite
[1] Patrik Reizinger, Márton Szemenyei, "Attention based Curiosity-driven Exploration in Deep Reinforcement Learning", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5386. Accessed: Aug. 12, 2020.

Bringing in the outliers: A sparse subspace clustering approach to learn a dictionary of mouse ultrasonic vocalizations

Paper Details

Authors:
Jiaxi Wang, Karel Mundnich, Allison T. Knoll, Pat Levitt, Shrikanth Narayanan
Submitted On:
17 May 2020 - 3:10pm

Document Files

ICASSP 2020 presentation

Cite
[1] Jiaxi Wang, Karel Mundnich, Allison T. Knoll, Pat Levitt, Shrikanth Narayanan, "Bringing in the outliers: A sparse subspace clustering approach to learn a dictionary of mouse ultrasonic vocalizations", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5385. Accessed: Aug. 12, 2020.

Stabilizing Multi-Agent Deep Reinforcement Learning by Implicitly Estimating Other Agents’ Behaviors


Deep reinforcement learning (DRL) is able to learn control policies for many complicated tasks, but its power has not been unleashed to handle multi-agent circumstances. Independent learning, where each agent treats others as part of the environment and learns its own policy without considering others' policies, is a simple way to apply DRL to multi-agent tasks. However, since the agents' policies change as learning proceeds, the environment is non-stationary from the perspective of each agent, which makes conventional DRL methods inefficient.
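
One common recipe in this spirit, not necessarily the paper's exact method, is sketched below: each agent encodes its own observation history into a latent vector that implicitly summarizes the other agents' (changing) behaviors, and conditions its policy on that latent. All dimensions are illustrative.

# Loose sketch (assumed design): policy conditioned on a behavior latent.
import torch
import torch.nn as nn

obs_dim, act_dim, latent = 10, 4, 16

encoder = nn.GRU(obs_dim, latent, batch_first=True)  # history -> behavior latent
policy = nn.Sequential(nn.Linear(obs_dim + latent, 64), nn.ReLU(),
                       nn.Linear(64, act_dim))

def act(obs_history):
    """obs_history: (batch, time, obs_dim); act on the latest observation."""
    _, h = encoder(obs_history)                       # (1, batch, latent)
    z = h.squeeze(0)
    logits = policy(torch.cat([obs_history[:, -1], z], dim=-1))
    return torch.distributions.Categorical(logits=logits).sample()

print(act(torch.randn(2, 20, obs_dim)))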

Paper Details

Authors:
Yue Jin, Shuangqing Wei, Jian Yuan, Xudong Zhang, Chao Wang
Submitted On:
17 May 2020 - 8:31am

Document Files

stabilizing_madrl_by_implicitly_estimating_other_agents'_behaviors.pdf

Cite
[1] Yue Jin, Shuangqing Wei, Jian Yuan, Xudong Zhang, Chao Wang, "Stabilizing Multi-Agent Deep Reinforcement Learning by Implicitly Estimating Other Agents’ Behaviors", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5384. Accessed: Aug. 12, 2020.

Synchronous Transformers for End-to-End Speech Recognition

Paper Details

Authors:
Zhengkun Tian, Jiangyan Yi, Ye Bai, Jianhua Tao, Shuai Zhang, Zhengqi Wen
Submitted On:
17 May 2020 - 3:20am

Document Files

Sync-Transformer-icassp2020.pdf

Cite
[1] Zhengkun Tian, Jiangyan Yi, Ye Bai, Jianhua Tao, Shuai Zhang, Zhengqi Wen, "Synchronous Transformers for End-to-End Speech Recognition", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5382. Accessed: Aug. 12, 2020.
