Generalized Multi-Source Inference for Text Conditioned Music Diffusion Models
- DOI:
- 10.60864/q145-mh78
- Submitted by:
- Emilian Postolache
- Last updated:
- 6 June 2024 - 10:22am
- Document Type:
- Poster
- Document Year:
- 2024
- Presenters:
- Emilian Postolache
- Paper Code:
- MLSP-P25.5
Multi-Source Diffusion Models (MSDM) allow for compositional musical generation tasks: generating a set of coherent sources, creating accompaniments, and performing source separation. Despite their versatility, they require estimating the joint distribution over the sources, necessitating pre-separated musical data, which is rarely available, and fixing the number and type of sources at training time. This paper generalizes MSDM to arbitrary time-domain diffusion models conditioned on text embeddings. These models do not require separated data as they are trained on mixtures, can parameterize an arbitrary number of sources, and allow for rich semantic control. We propose an inference procedure enabling the coherent generation of sources and accompaniments. Additionally, we adapt the Dirac separator of MSDM to perform source separation. We experiment with diffusion models trained on Slakh2100 and MTG-Jamendo, showcasing competitive generation and separation results in a relaxed data setting.
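The Dirac separator mentioned above constrains the sources to sum exactly to the observed mixture by parameterizing the last source as the mixture minus the others, so only N-1 sources are sampled freely. As a rough illustration, the sketch below shows a single Euler update of that constrained sampling scheme; the function name, the placeholder per-source score functions, and the step-size handling are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def dirac_separation_step(xs, mixture, score_fns, sigma, step_size):
    """One Euler step of score-based source separation under the
    Dirac (sum) constraint: the last source is implied as
    x_N = mixture - sum(x_1..x_{N-1}), so only N-1 sources are free.

    xs        : list of N-1 free source estimates (np.ndarray)
    mixture   : observed mixture waveform (np.ndarray)
    score_fns : N per-source score functions s_n(x, sigma); in the
                paper's setting these would be a text-conditioned
                diffusion model queried with per-source prompts
                (placeholders here)
    """
    # Last source is determined by the constraint, not sampled.
    x_last = mixture - sum(xs)
    s_last = score_fns[-1](x_last, sigma)
    new_xs = []
    for x, s_fn in zip(xs, score_fns[:-1]):
        # Gradient of the constrained log-density w.r.t. x_n picks up
        # a -s_N term because x_N depends on x_n through the constraint.
        grad = s_fn(x, sigma) - s_last
        new_xs.append(x + step_size * grad)
    return new_xs
```

In a full sampler this update would be repeated over a decreasing noise schedule, with `score_fns` backed by the trained text-conditioned model rather than analytic placeholders.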