
SAMPLERNN-BASED NEURAL VOCODER FOR STATISTICAL PARAMETRIC SPEECH SYNTHESIS

Citation Author(s):
Yang Ai, Hong-Chuan Wu, Zhen-Hua Ling
Submitted by:
Yang Ai
Last updated:
13 April 2018 - 3:29am
Document Type:
Poster
Document Year:
2018
Event:
Presenters Name:
Yang Ai
Paper Code:
1671

Abstract:

This paper presents a SampleRNN-based neural vocoder for statistical parametric speech synthesis. The method uses a conditional SampleRNN model, composed of a hierarchy of GRU layers and feed-forward layers, to capture long-span dependencies between acoustic feature sequences and waveform sequences. Compared with conventional vocoders based on the source-filter model, the proposed vocoder is trained without assumptions derived from prior knowledge of speech production and can better model and recover phase information. Objective and subjective evaluations were conducted on two corpora. Experimental results suggest that the proposed vocoder achieves higher-quality synthetic speech than the STRAIGHT vocoder and a WaveNet-based neural vocoder with similar run-time efficiency, whether natural or predicted acoustic features are used as inputs.
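The two-tier structure described in the abstract — a slow frame-level recurrent tier conditioned on acoustic features driving a fast sample-level tier — can be illustrated with a minimal sketch. This is not the authors' implementation: the GRU cell is untrained with random weights, the output projection `w_out` is a hypothetical placeholder, and the real model generates samples autoregressively (each tier also consumes previously generated samples), which this sketch omits for brevity.

```python
import math
import random

random.seed(0)  # deterministic toy weights

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class GRUCell:
    """Minimal GRU cell with random, untrained weights (illustrative only)."""
    def __init__(self, input_size, hidden_size):
        def mat(rows, cols):
            return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        self.hidden_size = hidden_size
        # weights for update gate z, reset gate r, and candidate state
        self.Wz, self.Uz = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wr, self.Ur = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wh, self.Uh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)

    def step(self, x, h):
        def mv(W, v):  # matrix-vector product
            return [sum(W[i][j] * v[j] for j in range(len(v)))
                    for i in range(len(W))]
        z = [sigmoid(a + b) for a, b in zip(mv(self.Wz, x), mv(self.Uz, h))]
        r = [sigmoid(a + b) for a, b in zip(mv(self.Wr, x), mv(self.Ur, h))]
        rh = [ri * hi for ri, hi in zip(r, h)]
        hc = [math.tanh(a + b) for a, b in zip(mv(self.Wh, x), mv(self.Uh, rh))]
        return [(1 - zi) * hi + zi * hci for zi, hi, hci in zip(z, h, hc)]

def conditional_samplernn(acoustic_frames, samples_per_frame, hidden_size=8):
    """Two-tier sketch: a frame-level GRU conditioned on acoustic features
    drives a sample-level feed-forward predictor (one sample per fast tick)."""
    feat_dim = len(acoustic_frames[0])
    frame_rnn = GRUCell(feat_dim, hidden_size)
    # hypothetical output projection: hidden state -> one waveform sample
    w_out = [random.uniform(-0.1, 0.1) for _ in range(hidden_size)]
    h = [0.0] * hidden_size
    waveform = []
    for feats in acoustic_frames:           # frame-level tier (slow clock)
        h = frame_rnn.step(feats, h)        # update state from acoustic features
        for _ in range(samples_per_frame):  # sample-level tier (fast clock)
            s = math.tanh(sum(wi * hi for wi, hi in zip(w_out, h)))
            waveform.append(s)
    return waveform

frames = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]  # toy acoustic feature frames
wav = conditional_samplernn(frames, samples_per_frame=4)
print(len(wav))  # 8 samples: 2 frames x 4 samples per frame
```

The key point mirrored here is the clock-rate hierarchy: the recurrent tier runs once per acoustic feature frame, while the sample tier runs once per waveform sample, which is how the model captures long-span dependencies between features and the waveform.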


Dataset Files

ICASSP2018_poster_aiyang.pdf
