
Training Generative Adversarial Network-Based Vocoder with Limited Data Using Augmentation-Conditional Discriminator

DOI:
10.60864/etw2-b234
Citation Author(s):
Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka
Submitted by:
Takuhiro Kaneko
Last updated:
6 June 2024 - 10:27am
Document Type:
Poster
Document Year:
2024
Presenters:
Takuhiro Kaneko
Paper Code:
SLP-P22.2

A generative adversarial network (GAN)-based vocoder trained with an adversarial discriminator is commonly used for speech synthesis because of its fast, lightweight, and high-quality characteristics. However, this data-driven model requires a large amount of training data, incurring high data-collection costs. This fact motivates us to train a GAN-based vocoder on limited data. A promising solution is to augment the training data to avoid overfitting. However, a standard discriminator is unconditional and insensitive to distributional changes caused by data augmentation. Thus, augmented speech (which can be extraordinary) may be judged to be real speech. To address this issue, we propose an augmentation-conditional discriminator (AugCondD) that receives the augmentation state as input in addition to speech, thereby assessing input speech according to its augmentation state without inhibiting the learning of the original non-augmented distribution. Experimental results indicate that AugCondD improves speech quality under limited-data conditions while achieving comparable speech quality under sufficient-data conditions. Audio samples are available at https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/augcondd/.
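The core idea of the abstract, conditioning the discriminator on whether its input was augmented, can be illustrated with a minimal sketch. The network below is a toy stand-in, not the authors' architecture: it simply concatenates a one-hot augmentation label with a speech feature vector before scoring, so augmented and non-augmented inputs are assessed against separate learned criteria. All dimensions, weights, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 16       # speech feature dimension (assumed for illustration)
NUM_AUG_STATES = 2  # 0 = non-augmented, 1 = augmented (assumed)
HIDDEN = 8

# Random weights stand in for trained discriminator parameters.
W1 = rng.standard_normal((FEAT_DIM + NUM_AUG_STATES, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

def aug_cond_discriminator(speech_feat, aug_state):
    """Score speech features conditioned on the augmentation state.

    The one-hot augmentation label is concatenated with the speech
    features, so the realness score depends on both the speech and
    how (or whether) it was augmented.
    """
    cond = np.zeros(NUM_AUG_STATES)
    cond[aug_state] = 1.0                  # one-hot augmentation label
    x = np.concatenate([speech_feat, cond])
    h = np.maximum(0.0, x @ W1 + b1)       # ReLU hidden layer
    return float(h @ W2 + b2)              # unbounded realness score

feat = rng.standard_normal(FEAT_DIM)
score_clean = aug_cond_discriminator(feat, aug_state=0)
score_aug = aug_cond_discriminator(feat, aug_state=1)
print(score_clean, score_aug)
```

Because the augmentation state is an explicit input, the same speech features generally receive different scores under the two states, which is the property that lets the generator learn the original non-augmented distribution without augmented samples being mistaken for ordinary real speech.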
