
In this paper, we analyze how much data a WaveNet-based speech synthesis method needs, and how consistent and accurate that data must be, to generate speech of good quality. We do this by adding artificial noise to the description of our training data and observing how well WaveNet trains and produces speech. More specifically, we add noise to both phonetic segmentation and annotation accuracy, and we also reduce the size of the training data by using fewer sentences during training of a WaveNet model. We conducted MUSHRA listening tests and used objective measures to track speech quality within the conducted experiments. We show that WaveNet retains high quality even after adding a small amount of noise (up to 10%) to phonetic segmentation and annotation. A small degradation of speech quality was observed for our WaveNet configuration when only 3 hours of training data were used.
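The perturbation idea described in the abstract (jittering phone boundary times and corrupting a fraction of phone labels) can be sketched roughly as follows. This is a minimal illustration under stated assumptions only: the function name, phone inventory, and noise parameters are hypothetical and do not reflect the authors' actual procedure.

```python
import random

# Hypothetical phone inventory used only for this illustration.
PHONES = ["a", "e", "i", "o", "u", "p", "t", "k", "s", "m", "n"]

def perturb_segmentation(segments, time_jitter_ms=10.0, label_error_rate=0.1, seed=0):
    """Add artificial noise to a phonetic segmentation.

    segments: list of (phone, start_ms, end_ms) tuples.
    Returns a new list with jittered boundaries and randomly swapped labels.
    """
    rng = random.Random(seed)
    noisy = []
    for phone, start, end in segments:
        # Shift each boundary by uniform noise within +/- time_jitter_ms.
        new_start = max(0.0, start + rng.uniform(-time_jitter_ms, time_jitter_ms))
        # Keep the segment at least 1 ms long so start < end always holds.
        new_end = max(new_start + 1.0, end + rng.uniform(-time_jitter_ms, time_jitter_ms))
        # With probability label_error_rate, replace the phone label
        # with a different one (simulating annotation errors).
        if rng.random() < label_error_rate:
            phone = rng.choice([p for p in PHONES if p != phone])
        noisy.append((phone, new_start, new_end))
    return noisy

segments = [("s", 0.0, 80.0), ("i", 80.0, 160.0), ("t", 160.0, 230.0)]
print(perturb_segmentation(segments))
```

A fixed random seed keeps the perturbation reproducible, so the same noisy dataset can be regenerated for each training run.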

Paper Details
- Authors: Zdeněk Hanzlíček, Jindřich Matoušek
- Submitted On: 13 April 2018 - 4:16pm
- Short Link: http://sigport.org/2759
- Type: Poster
- Presenter's Name: Jakub Vít
- Paper Code: 4348
- Document Year: 2018
@misc{hanzlicek2018analysis,
  title = {On the analysis of training data for wavenet-based speech synthesis},
  author = {Zdeněk Hanzlíček and Jindřich Matoušek},
  publisher = {IEEE SigPort},
  year = {2018},
  url = {http://sigport.org/2759}
}
TY - GEN
T1 - On the analysis of training data for wavenet-based speech synthesis
AU - Zdeněk Hanzlíček
AU - Jindřich Matoušek
PY - 2018
PB - IEEE SigPort
UR - http://sigport.org/2759
ER -