
Intonation: a Dataset of Quality Vocal Performances Refined by Spectral Clustering on Pitch Congruence

Citation Author(s): Sanna Wager, George Tzanetakis, Stefan Sullivan, Cheng-i Wang, John Shimmin, Minje Kim, Perry Cook
Submitted by: Sanna Wager
Last updated: 10 May 2019 - 3:28pm
Document Type: Poster
Document Year: 2019
Event:
Presenters: Sanna Wager
Paper Code: 3434

We introduce the "Intonation" dataset of amateur vocal performances with a tendency for good intonation, collected from Smule, Inc. The dataset can be used for music information retrieval tasks such as autotuning, query by humming, and singing style analysis. It is available upon request on the Stanford CCRMA DAMP website. We describe a semi-supervised approach to selecting the audio recordings from a larger collection of performances based on intonation patterns. The approach can be applied in other situations where a researcher needs to extract a subset of data samples from a large database. A comparison of the "Intonation" dataset and the remaining collection of performances shows that the two have different intonation behavior distributions.
