
Music Source Separation with Band-Split RoPE Transformer

DOI: 10.60864/8nm2-j542
Submitted by: Wei Tsung Lu
Last updated: 16 April 2024 - 10:36pm
Document Type: Presentation Slides
Document Year: 2024
Presenters: Wei-Tsung Lu
Paper Code: AASP-L5.2

Music source separation (MSS) aims to separate a music recording into multiple musically distinct stems, such as vocals, bass, and drums. Recently, deep learning approaches such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been used, but the improvement is still limited. In this paper, we propose a novel frequency-domain approach based on a Band-Split RoPE Transformer (called BS-RoFormer). BS-RoFormer relies on a band-split module to project the input complex spectrogram into subband-level representations, and then arranges a stack of hierarchical Transformers to model the inner-band as well as inter-band sequences for multi-band mask estimation. To facilitate training the model for MSS, we propose to use the Rotary Position Embedding (RoPE). The BS-RoFormer system trained on MUSDB18HQ and 500 extra songs ranked first place in the MSS track of the Sound Demixing Challenge (SDX’23). Benchmarking a smaller version of BS-RoFormer on MUSDB18HQ, we achieve a state-of-the-art result without extra training data, with an average SDR of 9.80 dB.
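To make the architecture described above more concrete, below is a minimal PyTorch sketch of the general idea: a band-split front end that projects groups of frequency bins into fixed-size embeddings, alternating Transformer layers over the inner-band (time) axis and the inter-band axis with rotary position embeddings applied to queries and keys, and per-band heads that map back to a complex-spectrogram mask. The band partition, embedding size, depth, and all class and parameter names (e.g. BSRoFormerSketch, band_edges) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def apply_rope(x):
    # x: (..., seq_len, dim) with even dim; rotate feature pairs by position-dependent angles
    seq_len, dim = x.shape[-2], x.shape[-1]
    half = dim // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, device=x.device, dtype=x.dtype) / half))
    angles = torch.arange(seq_len, device=x.device, dtype=x.dtype)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

class RoPEAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, seq, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda t: t.view(b, n, self.heads, d // self.heads).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        q, k = apply_rope(q), apply_rope(k)    # rotary position embedding on queries/keys
        attn = torch.softmax(q @ k.transpose(-2, -1) / (d // self.heads) ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.out(y)

class TransformerBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = RoPEAttention(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        return x + self.ff(self.norm2(x))

class BSRoFormerSketch(nn.Module):
    """Band-split front end + alternating time/band Transformers (illustrative only)."""
    def __init__(self, band_edges, dim=64, depth=2):
        super().__init__()
        self.band_edges = band_edges           # list of (start_bin, end_bin) per subband
        self.band_proj = nn.ModuleList(nn.Linear((e - s) * 2, dim) for s, e in band_edges)
        self.time_blocks = nn.ModuleList(TransformerBlock(dim) for _ in range(depth))
        self.band_blocks = nn.ModuleList(TransformerBlock(dim) for _ in range(depth))
        self.band_mask = nn.ModuleList(nn.Linear(dim, (e - s) * 2) for s, e in band_edges)

    def forward(self, spec):                   # spec: (batch, freq_bins, time, 2) real/imag
        b, f, t, _ = spec.shape
        # Band-split: project each frequency band to a fixed-size embedding per time frame
        bands = [proj(spec[:, s:e].permute(0, 2, 1, 3).reshape(b, t, -1))
                 for (s, e), proj in zip(self.band_edges, self.band_proj)]
        x = torch.stack(bands, dim=1)          # (batch, n_bands, time, dim)
        k = x.shape[1]
        for tb, bb in zip(self.time_blocks, self.band_blocks):
            # inner-band attention: sequence over time within each band
            x = tb(x.reshape(b * k, t, -1)).reshape(b, k, t, -1)
            # inter-band attention: sequence over bands at each time frame
            x = bb(x.transpose(1, 2).reshape(b * t, k, -1)).reshape(b, t, k, -1).transpose(1, 2)
        # Per-band mask estimation back to complex-spectrogram shape
        masks = [head(x[:, i]).reshape(b, t, e - s, 2).permute(0, 2, 1, 3)
                 for i, ((s, e), head) in enumerate(zip(self.band_edges, self.band_mask))]
        return torch.cat(masks, dim=1)         # (batch, freq_bins, time, 2)

A full system would additionally include spectrogram normalization, the specific band layout and deeper stacks described in the paper, and one mask head per target stem; the sketch above only shows how the band-split, hierarchical attention, and RoPE pieces fit together.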
