LATENT REPRESENTATION LEARNING FOR ARTIFICIAL BANDWIDTH EXTENSION USING A CONDITIONAL VARIATIONAL AUTO-ENCODER
- Submitted by:
- Pramod Bachhav
- Last updated:
- 7 May 2019 - 1:32pm
- Document Type:
- Poster
- Document Year:
- 2019
- Presenters:
- Pramod Bachhav
- Paper Code:
- 4583
Artificial bandwidth extension (ABE) algorithms can improve speech quality when wideband devices are used with narrowband devices or infrastructure. Most ABE solutions employ some form of memory, implying high-dimensional feature representations that increase both latency and complexity. Dimensionality reduction techniques have thus been developed to preserve efficiency. These entail the extraction of compact, low-dimensional representations that are then used with a standard regression model to estimate high-band components. Previous work shows that some form of supervision is crucial to the optimisation of dimensionality reduction techniques for ABE. This paper reports the first application of conditional variational auto-encoders (CVAEs) to supervised dimensionality reduction tailored specifically to ABE. CVAEs, a form of directed graphical model, are used to model high-dimensional log-spectral data and to extract compact latent representations of narrowband features. Objective and subjective assessments show that, compared with alternative dimensionality reduction techniques, the probabilistic latent representations learned with CVAEs produce bandwidth-extended speech signals of notably better quality.
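To make the CVAE mechanism concrete, the sketch below shows a single conditional encode/decode cycle with the reparameterisation trick: the encoder maps a log-spectral frame plus a narrowband conditioning vector to the parameters of a Gaussian posterior, a compact latent code is sampled, and the decoder reconstructs the frame from that code and the condition. All dimensions, layer shapes, and names are illustrative assumptions, not the authors' configuration; a trained model would use learned weights and a full ELBO objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's configuration)
D_X = 256   # high-dimensional log-spectral feature vector
D_C = 20    # narrowband conditioning feature vector
D_Z = 10    # compact latent representation used for high-band regression

def affine(d_in, d_out):
    """Randomly initialised affine layer: (weights, bias)."""
    return rng.normal(0.0, 0.01, (d_in, d_out)), np.zeros(d_out)

# Encoder q(z | x, c): emits mean and log-variance of a diagonal Gaussian
W_mu, b_mu = affine(D_X + D_C, D_Z)
W_lv, b_lv = affine(D_X + D_C, D_Z)
# Decoder p(x | z, c): reconstructs the log-spectral frame
W_dec, b_dec = affine(D_Z + D_C, D_X)

def encode(x, c):
    h = np.concatenate([x, c])
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def reparameterise(mu, log_var):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. (mu, sigma)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z, c):
    return np.concatenate([z, c]) @ W_dec + b_dec

x = rng.standard_normal(D_X)      # one log-spectral frame
c = rng.standard_normal(D_C)      # its narrowband condition

mu, log_var = encode(x, c)
z = reparameterise(mu, log_var)   # probabilistic latent representation
x_hat = decode(z, c)              # conditional reconstruction

# KL divergence of q(z|x,c) from the standard-normal prior (per frame)
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

In training, the reconstruction error between `x_hat` and `x` plus the KL term form the conditional ELBO; at run time only the low-dimensional `z` (with the narrowband condition) would feed the regression model that estimates the high-band components.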