Fast and Easy Crowdsourced Perceptual Audio Evaluation
- Submitted by: Mark Cartwright
- Last updated: 16 March 2016 - 9:44pm
- Document Type: Poster
- Document Year: 2016
- Presenters: Mark Cartwright
Automated objective methods of audio evaluation are fast, cheap, and require little effort by the investigator. However, objective evaluation methods do not exist for the output of all audio processing algorithms, often correlate poorly with human quality assessments, and require ground truth data in their calculation. Subjective human ratings of audio quality are the gold standard for many tasks, but they are expensive, slow, and require a great deal of effort to recruit subjects and run listening tests. Moving listening tests from the lab to the micro-task labor market of Amazon Mechanical Turk speeds data collection and reduces investigator effort. However, it also reduces the amount of control investigators have over the testing environment, adding new variability and potential biases to the data. In this work, we compare multiple-stimulus listening tests performed in a lab environment to multiple-stimulus listening tests performed in a web environment on a population drawn from Mechanical Turk.
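The comparison described above comes down to asking whether per-stimulus quality ratings collected on Mechanical Turk track the ratings collected in the lab. The sketch below illustrates one common way to quantify such agreement, correlating per-stimulus mean ratings across the two environments; the data, sample sizes, and noise levels are hypothetical placeholders, not results or code from this work.

```python
# Minimal sketch: comparing lab and web (Mechanical Turk) multiple-stimulus
# ratings for the same set of stimuli. All numbers here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_stimuli, n_lab, n_web = 10, 15, 60

# Hypothetical quality ratings on a 0-100 scale: one row per stimulus,
# one column per listener. Web ratings get extra noise to mimic the
# reduced environmental control of crowdsourced tests.
true_quality = rng.uniform(20, 90, size=n_stimuli)
lab_ratings = true_quality[:, None] + rng.normal(0, 5, size=(n_stimuli, n_lab))
web_ratings = true_quality[:, None] + rng.normal(0, 12, size=(n_stimuli, n_web))

# Collapse each environment to a mean rating per stimulus.
lab_means = lab_ratings.mean(axis=1)
web_means = web_ratings.mean(axis=1)

# Agreement between environments: linear and rank correlation.
r, r_p = stats.pearsonr(lab_means, web_means)
rho, rho_p = stats.spearmanr(lab_means, web_means)
print(f"Pearson r = {r:.3f} (p = {r_p:.3g})")
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3g})")
```

With more crowdsourced listeners than lab listeners, per-stimulus means can remain stable even when individual web ratings are noisier, which is the practical trade-off the work examines.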