On the analysis of training data for wavenet-based speech synthesis

Citation Author(s):
Zdeněk Hanzlíček, Jindřich Matoušek
Submitted by:
Jakub Vit
Last updated:
13 April 2018 - 4:16pm
Document Type:
Poster
Document Year:
2018
Event:
Presenters:
Jakub Vít
Paper Code:
4348

In this paper, we analyze how much training data the WaveNet-based speech synthesis method needs, and how consistent and accurate that data must be, for it to generate speech of good quality. We do this by adding artificial noise to the description of our training data and observing how well WaveNet trains and produces speech. More specifically, we add noise to both the phonetic segmentation and the annotation, and we also reduce the size of the training data by using fewer sentences during training of a WaveNet model. We conducted MUSHRA listening tests and used objective measures to track speech quality within the conducted experiments. We show that WaveNet retains high quality even after adding a small amount of noise (up to 10%) to the phonetic segmentation and annotation. A small degradation of speech quality was observed for our WaveNet configuration when only 3 hours of training data were used.
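The two kinds of artificial noise described above can be illustrated with a minimal sketch. The function below is a hypothetical illustration (the phone inventory, parameter names, and jitter model are assumptions, not the paper's actual procedure): it relabels a given fraction of phones to simulate annotation noise and shifts segment boundaries by a uniform jitter to simulate segmentation noise.

```python
import random

# Hypothetical phone inventory used only for this illustration.
PHONE_SET = ["a", "e", "i", "o", "u", "p", "t", "k", "s", "n"]

def corrupt_annotation(segments, label_noise=0.10, boundary_jitter_ms=10.0, rng=None):
    """Perturb a phonetic segmentation: relabel a fraction of phones
    (annotation noise) and jitter segment boundaries (segmentation noise).

    `segments` is a list of (start_ms, end_ms, phone) tuples.
    """
    rng = rng or random.Random(0)
    noisy = []
    for start, end, phone in segments:
        # Annotation noise: with probability `label_noise`,
        # replace the label with a different random phone.
        if rng.random() < label_noise and phone in PHONE_SET:
            phone = rng.choice([p for p in PHONE_SET if p != phone])
        # Segmentation noise: shift both boundaries by a uniform jitter,
        # clamped so that 0 <= start <= end still holds.
        start = max(0.0, start + rng.uniform(-boundary_jitter_ms, boundary_jitter_ms))
        end = max(start, end + rng.uniform(-boundary_jitter_ms, boundary_jitter_ms))
        noisy.append((start, end, phone))
    return noisy
```

With `label_noise=0.10`, roughly 10% of labels are corrupted, matching the noise level the abstract reports WaveNet as tolerating without a clear quality drop.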
