
Music Signal Processing

A HYBRID NEURAL NETWORK BASED ON THE DUPLEX MODEL OF PITCH PERCEPTION FOR SINGING MELODY EXTRACTION


In this paper, we build a hybrid neural network (NN) for singing melody extraction from polyphonic music by imitating human pitch perception. For human hearing, there are two models of pitch perception, the spectral model and the temporal model, depending on whether or not harmonics are resolved. Here, we first use NNs to implement each model individually and evaluate their performance on the task of singing melody extraction.
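As a plain illustration of the two cues the duplex model separates (not the authors' network), the sketch below estimates the pitch of a synthetic harmonic tone twice: once from the magnitude spectrum (spectral model, relying on resolved harmonics) and once from the waveform's autocorrelation (temporal model). The signal and all parameters are invented for the example.

```python
import numpy as np

sr = 16000
n = 2048
t = np.arange(n) / sr
f0 = 220.0
# Synthetic "voice": a few harmonics with 1/k amplitudes.
x = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))

# Spectral model: strongest low-frequency peak of the magnitude spectrum,
# a cue that relies on the harmonics being resolved.
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / sr)
spectral_f0 = freqs[np.argmax(spec[freqs < 1000.0])]

# Temporal model: periodicity of the waveform via autocorrelation,
# a cue that still works when harmonics are unresolved.
ac = np.correlate(x, x, mode="full")[n - 1:]
lag_min = sr // 1000                      # ignore lags < 1 ms (f0 > 1 kHz)
lag_max = sr // 50                        # ignore lags > 20 ms (f0 < 50 Hz)
period = lag_min + np.argmax(ac[lag_min:lag_max])
temporal_f0 = sr / period

print(f"spectral: {spectral_f0:.1f} Hz, temporal: {temporal_f0:.1f} Hz")
```

Both estimates land near the true 220 Hz here; the paper's point is that each cue fails in different conditions, which motivates combining NN implementations of both.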

Authors: Hsin Chou, Ming-Tso Chen, and Tai-Shih Chi
Submitted: 22 April 2018

File: A HYBRID NEURAL NETWORK BASED ON THE DUPLEX MODEL OF PITCH PERCEPTION FOR SINGING MELODY EXTRACTION.pdf

Cite: Hsin Chou, Ming-Tso Chen, and Tai-Shih Chi, "A HYBRID NEURAL NETWORK BASED ON THE DUPLEX MODEL OF PITCH PERCEPTION FOR SINGING MELODY EXTRACTION," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3131

MUSIC CHORD RECOGNITION BASED ON MIDI-TRAINED DEEP FEATURE AND BLSTM-CRF HYBRID DECODING


In this paper, we design a novel deep-learning-based hybrid system for automatic chord recognition. Currently, the amount of annotated data available for training robust acoustic models is a bottleneck, as hand-annotating time-synchronized chord labels requires professional musical skill and considerable labor. As a solution to this problem, we construct a large set of time-synchronized MIDI-audio pairs and use these data to train a Deep Residual Network (DRN) feature extractor, which can then estimate pitch-class activations of real-world music audio recordings.
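As a hedged sketch of how time-synchronized MIDI can supply training targets for the paired audio, the snippet below converts a note list into frame-level pitch-class activations. The function name, the 10 ms hop, and the binary encoding are our assumptions for illustration, not the paper's specification.

```python
import numpy as np

def pitch_class_targets(notes, n_frames, hop_s=0.01):
    """notes: iterable of (onset_s, offset_s, midi_pitch) taken from a MIDI file.
    Returns an (n_frames, 12) binary matrix of active pitch classes per frame."""
    target = np.zeros((n_frames, 12), dtype=np.float32)
    for onset, offset, pitch in notes:
        start = int(onset / hop_s)
        end = min(int(np.ceil(offset / hop_s)), n_frames)
        target[start:end, pitch % 12] = 1.0
    return target

# e.g. a C major triad (C4, E4, G4) held for half a second at a 10 ms hop
y = pitch_class_targets([(0.0, 0.5, 60), (0.0, 0.5, 64), (0.0, 0.5, 67)], n_frames=100)
print(y.shape, y[0])   # (100, 12), ones at pitch classes 0 (C), 4 (E), 7 (G)
```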

Submitted: 19 April 2018

File: ICASSP2018Poster_WuYiming.pdf

Cite: "MUSIC CHORD RECOGNITION BASED ON MIDI-TRAINED DEEP FEATURE AND BLSTM-CRF HYBRID DECODING," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3056

CREPE: A Convolutional Representation for Pitch Estimation


The task of estimating the fundamental frequency of a monophonic sound recording, also known as pitch tracking, is fundamental to audio processing, with applications in speech processing and music information retrieval. To date, the best-performing techniques, such as the pYIN algorithm, are based on a combination of DSP pipelines and heuristics. While such techniques perform very well on average, there remain many cases in which they fail to estimate the pitch correctly.
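CREPE is distributed as an open-source Python package, so its data-driven alternative can be tried directly. A minimal usage sketch follows; the input file name is a hypothetical mono recording.

```python
# pip install crepe  (the open-source reference implementation)
from scipy.io import wavfile
import crepe

sr, audio = wavfile.read("vocals.wav")   # hypothetical mono recording
# viterbi=True applies temporal smoothing to the framewise pitch curve
time, frequency, confidence, activation = crepe.predict(audio, sr, viterbi=True)

for t, f, c in zip(time[:5], frequency[:5], confidence[:5]):
    print(f"{t:.2f} s  {f:7.2f} Hz  (confidence {c:.2f})")
```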

Authors: Jong Wook Kim, Justin Salamon, Peter Li, Juan Pablo Bello
Submitted: 19 April 2018

File: crepe.pdf

Cite: Jong Wook Kim, Justin Salamon, Peter Li, and Juan Pablo Bello, "CREPE: A Convolutional Representation for Pitch Estimation," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3042

SVSGAN: SINGING VOICE SEPARATION VIA GENERATIVE ADVERSARIAL NETWORK

Authors: Zhe-Cheng Fan, Yen-Lin Lai, and Jhy-Shing Roger Jang
Submitted: 17 April 2018

File: fan18icassp_poster.pdf

Cite: Zhe-Cheng Fan, Yen-Lin Lai, and Jhy-Shing Roger Jang, "SVSGAN: SINGING VOICE SEPARATION VIA GENERATIVE ADVERSARIAL NETWORK," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2951

Vocal melody extraction using patch-based CNN


The patch-based convolutional neural network (CNN) model presented in this paper for vocal melody extraction in polyphonic music is inspired by object detection in image processing. The input to the model is a novel time-frequency representation that enhances pitch contours and suppresses the harmonic components of the signal. This succinct data representation and the patch-based CNN model enable an efficient training process with limited labeled data. Experiments on various datasets show excellent speed and competitive accuracy compared with other deep learning approaches.
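To make the object-detection analogy concrete, here is a hedged numpy sketch of the patch idea: cropping fixed-size time-frequency windows around candidate pitch locations so that a CNN can classify each crop. The patch size, helper name, and random stand-in representation are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def extract_patches(tf_rep, candidates, height=25, width=25):
    """tf_rep: (freq_bins, frames) array; candidates: list of (bin, frame)
    centers. Returns one (height, width) patch per candidate, zero-padded
    at the edges of the representation."""
    padded = np.pad(tf_rep, ((height // 2,) * 2, (width // 2,) * 2))
    return np.stack([padded[b:b + height, f:f + width] for b, f in candidates])

tf_rep = np.random.rand(360, 1000)        # stand-in for the enhanced representation
patches = extract_patches(tf_rep, [(100, 50), (210, 400)])
print(patches.shape)                      # (2, 25, 25): inputs for the patch CNN
```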

Submitted: 17 April 2018

File: poster_icassp_v2.pdf

Cite: "Vocal melody extraction using patch-based CNN," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2932

POLYPHONIC MUSIC SEQUENCE TRANSDUCTION WITH METER-CONSTRAINED LSTM NETWORKS


Automatic transcription of polyphonic music remains a challenging task in the field of Music Information Retrieval. In this paper, we propose a new method to post-process the output of a multi-pitch detection model using recurrent neural networks. In particular, we compare the use of a fixed sample rate against a meter-constrained time step on a piano performance audio dataset. The metric ground truth is estimated using automatic symbolic alignment, which we make available for further study.
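A minimal PyTorch sketch of such a transduction step is shown below: an LSTM reads the framewise multi-pitch activations from an acoustic model and emits cleaned-up per-frame note probabilities. The 88-pitch input, hidden size, and class name are our assumptions, not the authors' architecture; the choice of fixed-rate versus meter-constrained time steps only changes what one sequence position means.

```python
import torch
import torch.nn as nn

class Transducer(nn.Module):
    """Reads framewise multi-pitch activations and emits cleaned-up
    per-frame note probabilities."""
    def __init__(self, n_pitches=88, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_pitches, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_pitches)

    def forward(self, activations):       # (batch, frames, n_pitches)
        h, _ = self.lstm(activations)
        return torch.sigmoid(self.out(h))

model = Transducer()
# 200 time steps: one per frame at a fixed rate, or one per metric unit
acts = torch.rand(1, 200, 88)
print(model(acts).shape)                  # torch.Size([1, 200, 88])
```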

Authors: Adrien Ycart, Emmanouil Benetos
Submitted: 16 April 2018

File: Adrien Ycart ICASSP 2018 poster A0.pdf

Cite: Adrien Ycart and Emmanouil Benetos, "POLYPHONIC MUSIC SEQUENCE TRANSDUCTION WITH METER-CONSTRAINED LSTM NETWORKS," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2915

TRANSCRIBING LYRICS FROM COMMERCIAL SONG AUDIO: THE FIRST STEP TOWARDS SINGING CONTENT PROCESSING


Spoken content processing (such as retrieval and browsing) is maturing, but singing content has been almost completely left out. Songs are human voice carrying plenty of semantic information, just as speech does, and may be considered a special type of speech with highly flexible prosody. The various problems in song audio, for example phone durations that change significantly over highly flexible pitch contours, make recognizing lyrics from song audio much more difficult. This paper reports an initial attempt toward this goal.

Authors: Che-Ping Tsai, Yi-Lin Tuan, Lin-shan Lee
Submitted: 15 April 2018

File: poster_v4.pdf

Cite: Che-Ping Tsai, Yi-Lin Tuan, and Lin-shan Lee, "TRANSCRIBING LYRICS FROM COMMERCIAL SONG AUDIO: THE FIRST STEP TOWARDS SINGING CONTENT PROCESSING," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2878

EFFECTIVE COVER SONG IDENTIFICATION BASED ON SKIPPING BIGRAMS


So far, few cover song identification systems that utilize indexing techniques have achieved great success. In this paper, we propose a novel approach based on skipping bigrams that can be used for effective indexing. Using vector quantization, our algorithm encodes signals into code sequences; the bigram histograms of these code sequences then represent the original recordings and measure their similarity. Through vector quantization and skipping bigrams, our model shows great robustness against speed and structure variations in cover songs.
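The core counting step is easy to state in code. Below is a minimal sketch, under our own assumptions (maximum skip of 3, histogram-intersection similarity), of a skipping-bigram histogram: ordered code pairs are counted whenever they occur within a small gap, which is what tolerates local speed and structure changes between a song and its cover.

```python
import numpy as np

def skipping_bigram_hist(codes, n_codes, max_skip=3):
    """Histogram of ordered code pairs (a, b) with b at most max_skip
    positions after a; normalized so recordings of different lengths
    remain comparable."""
    hist = np.zeros((n_codes, n_codes))
    for i, a in enumerate(codes):
        for b in codes[i + 1 : i + 1 + max_skip]:
            hist[a, b] += 1
    return hist / max(hist.sum(), 1)

song = [3, 1, 4, 1, 5, 9, 2, 6]           # VQ code sequence of the original
cover = [3, 1, 1, 4, 5, 9, 9, 2, 6]       # slower cover: some codes repeat
h1 = skipping_bigram_hist(song, n_codes=10)
h2 = skipping_bigram_hist(cover, n_codes=10)
print(np.minimum(h1, h2).sum())           # histogram-intersection similarity
```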

Authors: Xiaoshuo Xu, Xiaoou Chen, Deshun Yang
Submitted: 13 April 2018

File: EFFECTIVE COVER SONG IDENTIFICATION BASED ON SKIPPING BIGRAMS.pdf

Cite: Xiaoshuo Xu, Xiaoou Chen, and Deshun Yang, "EFFECTIVE COVER SONG IDENTIFICATION BASED ON SKIPPING BIGRAMS," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2790

A PARALLEL FUSION APPROACH TO PIANO MUSIC TRANSCRIPTION BASED ON CONVOLUTIONAL NEURAL NETWORK

Authors: Shuchang Liu, Li Guo, Geraint A. Wiggins
Submitted: 13 April 2018

File: icassp_poster_new.pdf

Cite: Shuchang Liu, Li Guo, and Geraint A. Wiggins, "A PARALLEL FUSION APPROACH TO PIANO MUSIC TRANSCRIPTION BASED ON CONVOLUTIONAL NEURAL NETWORK," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2703

Bayesian anisotropic Gaussian model for audio source separation


In audio source separation applications, it is common to model the sources as circular-symmetric Gaussian random variables, which is equivalent to assuming that the phase of each source is uniformly distributed. In this paper, we introduce an anisotropic Gaussian source model in which both the magnitude and phase parameters are modeled as random variables. In such a model, it becomes possible to promote a phase value that originates from a signal model and to adjust the relative importance of this underlying model-based phase constraint.
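The contrast between the two source models can be seen numerically. Below is a minimal sketch under our own assumptions (not the paper's estimator): a circular-symmetric complex Gaussian has uniformly distributed phase, while an anisotropic one, with a nonzero mean along a model-based phase and unequal variance along versus across that direction, concentrates its phase around the model value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
model_phase = np.pi / 4            # phase value suggested by a signal model

# Circular-symmetric coefficient: zero mean, isotropic real/imag noise,
# so np.angle(iso) is uniform on [-pi, pi).
iso = rng.normal(size=n) + 1j * rng.normal(size=n)

# Anisotropic coefficient: nonzero mean along the model phase, with larger
# variance along that direction than across it, so the phase distribution
# peaks near model_phase instead of being flat.
aniso = np.exp(1j * model_phase) * (2.0 + rng.normal(size=n) + 0.2j * rng.normal(size=n))

print(np.angle(iso).std())         # ~pi/sqrt(3) = 1.81: phase is uninformative
print(np.angle(aniso).std())       # noticeably smaller: phase carries information
```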

Authors: Paul Magron, Tuomas Virtanen
Submitted: 20 April 2018

File: icassp18_magron.pdf

Cite: Paul Magron and Tuomas Virtanen, "Bayesian anisotropic Gaussian model for audio source separation," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2630
