
Semi-Supervised Adversarial Audio Source Separation applied to Singing Voice Extraction

Abstract: 

The state of the art in music source separation employs neural networks trained in a supervised fashion on multi-track databases to estimate the sources from a given mixture. With only a few datasets available, extensive data augmentation is often used to combat overfitting. Mixing random tracks, however, can even reduce separation performance, as instruments in real music are strongly correlated. The key concept in our approach is that the source estimates of an optimal separator should be indistinguishable from real source signals. Based on this idea, we drive the separator towards outputs deemed realistic by discriminator networks that are trained to distinguish real source samples from separator outputs. This way, we can also use unpaired source and mixture recordings without the drawbacks of creating unrealistic music mixtures. Our framework is widely applicable, as it does not assume a specific network architecture or number of sources. To our knowledge, this is the first adoption of adversarial training for music source separation. In a prototype experiment on singing voice separation, our approach improves separation performance compared to purely supervised training.
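The combined objective described above can be sketched in miniature. The toy separator, discriminator, and loss weighting below are illustrative assumptions, not the paper's actual architecture: a supervised reconstruction term on paired mixture/source data is combined with a GAN-style adversarial term that pushes separator outputs on unpaired mixtures toward regions the discriminator scores as realistic.

```python
import numpy as np

def separator(mix, w):
    # Toy linear "separator": scales the mixture to estimate a source.
    return w * mix

def discriminator(x, d):
    # Toy discriminator: sigmoid score, close to 1 means "looks like a real source".
    return 1.0 / (1.0 + np.exp(-d * x.mean()))

def semi_supervised_loss(w, d, paired_mix, paired_src, unpaired_mix, alpha=0.5):
    # Supervised term: MSE between the estimate and the true source (paired data).
    sup = np.mean((separator(paired_mix, w) - paired_src) ** 2)
    # Adversarial term on unpaired mixtures: non-saturating GAN-style loss
    # that rewards estimates the discriminator deems realistic.
    est = separator(unpaired_mix, w)
    adv = -np.log(discriminator(est, d) + 1e-8)
    return sup + alpha * adv
```

In the real framework the separator and discriminators are deep networks trained alternately; the point of the sketch is only the shape of the loss, where the adversarial term lets unpaired recordings contribute a training signal.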


Paper Details

Authors:
Daniel Stoller, Sebastian Ewert, Simon Dixon
Submitted On:
18 April 2018 - 3:25pm
Short Link:
Type:
Presentation Slides
Event:
Presenter's Name:
Daniel Stoller
Paper Code:
2331
Document Year:
2018
Cite

Document Files

Presentation slides version 3


Presentation slides final version


[1] Daniel Stoller, Sebastian Ewert, Simon Dixon, "Semi-Supervised Adversarial Audio Source Separation applied to Singing Voice Extraction", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2901. Accessed: Jul. 16, 2018.