
Modeling Melodic Feature Dependency with Modularized Variational Auto-Encoder

Citation Author(s):
Yu-An Wang, Yu-Kai Huang, Tzu-Chuan Lin, Shang-Yu Su
Submitted by:
Yun-Nung Chen
Last updated:
10 May 2019 - 8:16pm
Document Type:
Presentation Slides
Document Year:
2019
Presenters:
Yun-Nung Chen
Paper Code:
AASP-L7.3
 

Automatic melody generation has been a long-standing aspiration for both AI researchers and musicians. However, learning to generate euphonious melodies has proven highly challenging. This paper introduces 1) a new variant of the variational autoencoder (VAE), whose structure is modularized with domain knowledge in order to model polyphonic and dynamic music, and 2) a hierarchical encoding/decoding strategy that explicitly models the dependency between melodic features. The proposed framework generates distinct melodies that sound natural, and experiments on the generated music clips show that it outperforms the baselines in human evaluation.
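To make the modularized design concrete, the sketch below is one plausible instantiation, not the paper's actual implementation: it assumes just two melodic features (pitch and duration), gives each its own encoder/decoder module, and models the dependency between features hierarchically by conditioning the pitch decoder on the duration latent. All module names, vocabulary sizes, and dimensions are illustrative assumptions.

```python
# Minimal sketch of a modularized VAE with hierarchical feature dependency.
# Assumptions (not from the paper): two melodic features, pitch and duration,
# each with its own encoder/decoder; the pitch decoder is conditioned on the
# duration latent to capture the dependency between features.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Encodes one melodic feature sequence into a Gaussian latent."""
    def __init__(self, vocab_size, hidden=128, z_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x):                      # x: (B, T) token ids
        _, h = self.rnn(self.emb(x))           # h: (1, B, hidden)
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)

class FeatureDecoder(nn.Module):
    """Decodes one feature sequence from a latent (plus optional context)."""
    def __init__(self, vocab_size, z_dim=32, ctx_dim=0, hidden=128, seq_len=16):
        super().__init__()
        self.seq_len = seq_len
        self.fc = nn.Linear(z_dim + ctx_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, z):                      # z: (B, z_dim + ctx_dim)
        h = torch.tanh(self.fc(z)).unsqueeze(1).repeat(1, self.seq_len, 1)
        y, _ = self.rnn(h)
        return self.out(y)                     # (B, seq_len, vocab)

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

class ModularizedVAE(nn.Module):
    def __init__(self, pitch_vocab=130, dur_vocab=32, z_dim=32):
        super().__init__()
        self.enc_pitch = FeatureEncoder(pitch_vocab, z_dim=z_dim)
        self.enc_dur = FeatureEncoder(dur_vocab, z_dim=z_dim)
        self.dec_dur = FeatureDecoder(dur_vocab, z_dim=z_dim)
        # Hierarchical dependency: pitch decoding conditions on z_dur.
        self.dec_pitch = FeatureDecoder(pitch_vocab, z_dim=z_dim, ctx_dim=z_dim)

    def forward(self, pitch, dur):
        mu_p, lv_p = self.enc_pitch(pitch)
        mu_d, lv_d = self.enc_dur(dur)
        z_p = reparameterize(mu_p, lv_p)
        z_d = reparameterize(mu_d, lv_d)
        dur_logits = self.dec_dur(z_d)
        pitch_logits = self.dec_pitch(torch.cat([z_p, z_d], dim=-1))
        return pitch_logits, dur_logits, (mu_p, lv_p), (mu_d, lv_d)
```

Under these assumptions, training would combine per-feature reconstruction cross-entropy with a KL term for each latent; conditioning the pitch decoder on the duration latent is one simple way to realize the abstract's "dependency between melodic features" explicitly in the decoding hierarchy.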
