Deep ranking: triplet matchnet for music metric learning

Citation Author(s):
Rui Lu, Kailun Wu, Zhiyao Duan, Changshui Zhang
Submitted by:
Rui Lu
Last updated:
2 March 2017 - 2:56am
Document Type:
Presentation Slides
Document Year:
2017
Presenters:
Rui Lu
Paper Code:
1205

Metric learning for music is an important problem for many music information retrieval (MIR) applications such as music generation, analysis, retrieval, classification and recommendation. Traditional music metrics are mostly defined as linear transformations of handcrafted audio features, and may be inadequate in many situations given the large variety of music styles and instrumentations. In this paper, we propose a deep neural network named Triplet MatchNet to learn metrics directly from raw audio signals of triplets of music excerpts with human-annotated relative similarity, in a supervised fashion. Its end-to-end architecture has the advantage of jointly learning highly nonlinear feature representations and metrics. Experiments on a widely used music similarity measure dataset show that our method significantly outperforms three state-of-the-art music metric learning methods. Experiments also show that the learned features better preserve the partial orders of the relative similarity than handcrafted features.
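Triplet supervision of this kind is commonly trained with a hinge-style ranking loss that pulls the anchor toward the excerpt annotated as more similar and pushes it away from the less similar one. A minimal sketch of such a loss (the function name, margin value, and toy embeddings are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def triplet_hinge_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss over a triplet of embeddings: the anchor should be
    closer to the positive (more similar) excerpt than to the negative
    (less similar) one by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to similar excerpt
    d_neg = np.linalg.norm(anchor - negative)  # distance to dissimilar excerpt
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings standing in for network outputs on three excerpts.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])   # close to the anchor -> small d_pos
n = np.array([-1.0, 0.0])  # far from the anchor -> large d_neg

loss = triplet_hinge_loss(a, p, n)  # zero: the ordering is already satisfied
```

In an end-to-end network such as Triplet MatchNet, the embeddings would be produced by shared-weight branches of the network from raw audio, and this loss would be backpropagated through them so that the learned distance respects the human-annotated partial orders.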
