Self-Supervised Denoising Autoencoder with Linear Regression Decoder for Speech Enhancement
- Submitted by: Ryandhimas Zezario
- Last updated: 14 May 2020, 1:49am
- Document Type: Presentation Slides
- Document Year: 2020
- Presenters: Ryandhimas Zezario
- Paper Code: 5519
Supervised nonlinear spectral-mapping models have been successfully applied to speech enhancement. However, as supervised learning approaches, they require a large amount of labelled data (noisy-clean speech pairs) for training. In addition, their performance under unseen noise conditions is not guaranteed, a common weakness of supervised learning. In this study, we propose a self-supervised learning approach for speech enhancement: the denoising autoencoder with linear regression decoder (DAELD). DAELD is trained with noisy speech as both input and target output, in a self-supervised manner. By properly setting a shrinkage threshold on the internal hidden representations, noise can be removed when the speech is reconstructed from these representations through the linear regression decoder. Speech enhancement experiments were carried out to evaluate the proposed model. The results confirm that DAELD achieves comparable, and sometimes better, enhancement performance than conventional supervised speech enhancement approaches, in both seen and unseen noise environments. Moreover, we observe that DAELD tends to achieve higher performance when the training data cover more diverse noise types and signal-to-noise-ratio (SNR) levels.
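To make the training and enhancement recipe concrete, the sketch below shows one way the idea described above could be implemented. It is only an illustration under assumptions not stated in the abstract: a fixed random nonlinear encoder, a decoder obtained in closed form by ridge-regularized linear regression that maps hidden activations back to the noisy input, and soft-threshold shrinkage of the hidden representation at enhancement time. The function names (train_daeld, enhance) and all hyperparameters are hypothetical, not the authors' settings.

```python
# Minimal NumPy sketch of the self-supervised DAELD idea (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

def train_daeld(noisy_spectra, hidden_dim=512, ridge=1e-3):
    """Self-supervised training: noisy spectra serve as both input and target.

    noisy_spectra: (num_frames, num_bins) magnitude-spectrogram frames.
    Returns (encoder_weights, encoder_bias, decoder_weights).
    """
    n_bins = noisy_spectra.shape[1]
    # Assumed: random nonlinear encoder, fixed after initialization.
    W_enc = rng.standard_normal((n_bins, hidden_dim)) / np.sqrt(n_bins)
    b_enc = rng.standard_normal(hidden_dim)
    H = np.tanh(noisy_spectra @ W_enc + b_enc)  # hidden representations
    # Linear regression decoder: ridge-regularized least-squares map from the
    # hidden representation back to the (noisy) input spectra.
    A = H.T @ H + ridge * np.eye(hidden_dim)
    W_dec = np.linalg.solve(A, H.T @ noisy_spectra)
    return W_enc, b_enc, W_dec

def enhance(noisy_spectra, W_enc, b_enc, W_dec, shrink=0.1):
    """Enhancement: shrink small hidden activations before linear decoding."""
    H = np.tanh(noisy_spectra @ W_enc + b_enc)
    # Soft shrinkage: small-magnitude components (assumed noise-dominated)
    # are suppressed before reconstruction through the linear decoder.
    H_shrunk = np.sign(H) * np.maximum(np.abs(H) - shrink, 0.0)
    return H_shrunk @ W_dec

# Toy usage with random data standing in for magnitude-spectrogram frames.
noisy = np.abs(rng.standard_normal((1000, 257)))
params = train_daeld(noisy)
enhanced = enhance(noisy, *params)
print(enhanced.shape)  # (1000, 257)
```

The shrinkage threshold plays the same role the abstract attributes to it: because training only ever sees noisy speech, the separation of speech from noise comes from suppressing weak hidden activations before the linear reconstruction, and the threshold value trades off noise removal against speech distortion.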