CORN: Co-Trained Full- and No-Reference Speech Quality Assessment
- DOI: 10.60864/vb01-zd86
- Submitted by: Pranay Manocha
- Last updated: 6 June 2024 - 10:28am
- Document Type: Poster
- Document Year: 2024
- Presenters: Adam Finkelstein
- Paper Code: AASP-P6.2
Perceptual evaluation is a crucial aspect of many audio-processing tasks. Full-reference (FR) or similarity-based metrics rely on high-quality reference recordings, to which lower-quality or corrupted versions of a recording are compared for evaluation. In contrast, no-reference (NR) metrics evaluate a recording without relying on a reference. Each approach has advantages and drawbacks relative to the other. In this paper, we present a novel framework called CORN that amalgamates these dual approaches, concurrently training both FR and NR models together. After training, the models can be applied independently. We evaluate CORN by predicting several common objective metrics and across two different architectures. The NR model trained using CORN has access to a reference recording during training, and thus, as one would expect, it consistently outperforms baseline NR models trained independently. Perhaps more remarkably, the CORN FR model also outperforms its baseline counterpart, even though it relies on the same training data and the same model architecture. Thus, a single training regime produces two independently useful models, each outperforming its independently trained counterpart.
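The training setup described above — one FR predictor that sees both the reference and the degraded signal, one NR predictor that sees only the degraded signal, both optimized concurrently under a single objective — can be sketched in miniature. This is a toy illustration, not the CORN implementation: the random "features", the linear predictors, the learning rate, and the synthetic quality target (standing in for an objective metric such as those predicted in the paper) are all assumptions made here for illustration, and the paper's actual architectures and coupling between the two branches are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: per-recording feature vectors for a clean "reference" and a
# "degraded" version, plus a synthetic quality target (linear in the
# degradation so a linear model can illustrate the idea).
n, d = 256, 8
ref = rng.normal(size=(n, d))
noise = rng.normal(scale=0.3, size=(n, d))
deg = ref + noise
quality = np.sum(ref - deg, axis=1)  # more degradation -> lower score

x_fr = np.concatenate([ref, deg], axis=1)  # FR input: reference + degraded
w_fr = np.zeros(2 * d)                     # FR linear predictor
w_nr = np.zeros(d)                         # NR predictor: degraded only
lr = 0.1

def losses():
    fr_mse = np.mean((x_fr @ w_fr - quality) ** 2)
    nr_mse = np.mean((deg @ w_nr - quality) ** 2)
    return fr_mse, nr_mse

init_fr, init_nr = losses()
for _ in range(500):
    fr_err = x_fr @ w_fr - quality
    nr_err = deg @ w_nr - quality
    # Single joint objective: the sum of the FR and NR regression losses,
    # minimized concurrently for both predictors.
    w_fr -= lr * 2 * x_fr.T @ fr_err / n
    w_nr -= lr * 2 * deg.T @ nr_err / n
final_fr, final_nr = losses()
# After training, each predictor can be deployed on its own:
# the FR branch with a reference, the NR branch without one.
```

In this toy setting the FR predictor fits far better because it sees the reference; the point of CORN is that co-training also improves each branch over training it alone, which this sketch does not attempt to demonstrate.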