Applications in Music and Audio Processing (MLR-MUSI)

TRANSCRIBING LYRICS FROM COMMERCIAL SONG AUDIO: THE FIRST STEP TOWARDS SINGING CONTENT PROCESSING


Spoken content processing (such as retrieval and browsing) is maturing, but singing content has been almost completely left out. Songs carry plenty of semantic information in the human voice, just as speech does, and may be regarded as a special type of speech with highly flexible prosody. The particular characteristics of song audio, for example phone durations that vary dramatically over highly flexible pitch contours, make recognizing lyrics from song audio much more difficult than conventional speech recognition. This paper reports an initial attempt towards this goal.


Paper Details

Authors:
Che-Ping Tsai, Yi-Lin Tuan, Lin-shan Lee
Submitted On:
15 April 2018 - 12:49am

Document Files

poster_v4.pdf


[1] Che-Ping Tsai, Yi-Lin Tuan, Lin-shan Lee, "TRANSCRIBING LYRICS FROM COMMERCIAL SONG AUDIO: THE FIRST STEP TOWARDS SINGING CONTENT PROCESSING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2878. Accessed: Aug. 19, 2018.

Limiting Numerical Precision of Neural Networks to Achieve Real-time Voice Activity Detection

Paper Details

Authors:
Josh Fromm, Matthai Philipose, Ivan Tashev, Shuayb Zarar
Submitted On:
14 April 2018 - 3:25pm

Document Files

ICASSP (2018_04_14).pdf


[1] Josh Fromm, Matthai Philipose, Ivan Tashev, Shuayb Zarar, "Limiting Numerical Precision of Neural Networks to Achieve Real-time Voice Activity Detection", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2849. Accessed: Aug. 19, 2018.

CONVOLUTIONAL SEQUENCE TO SEQUENCE MODEL WITH NON-SEQUENTIAL GREEDY DECODING FOR GRAPHEME TO PHONEME CONVERSION


The greedy decoding method used in conventional sequence-to-sequence models is prone to compounding errors, mainly because it makes inferences in a fixed left-to-right order, regardless of whether the model's previous guesses are correct. We propose a non-sequential greedy decoding method that generalizes previously proposed greedy decoding schemes. At each inference step, the proposed method determines not only which token to emit, but also which position in the output sequence to fill.
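The confidence-ordered fill rule described above can be sketched as follows. The scoring function and the toy score table are hypothetical stand-ins for the convolutional model's output distribution, not the authors' implementation:

```python
import numpy as np

def nonseq_greedy_decode(score_fn, seq_len):
    """Fill an output sequence in confidence order rather than left to right.

    score_fn(partial) -> (seq_len, vocab_size) array of token scores given
    the current partially filled sequence (-1 marks empty slots).
    """
    out = np.full(seq_len, -1, dtype=int)
    for _ in range(seq_len):
        scores = score_fn(out)          # re-score every position each step
        scores[out >= 0] = -np.inf      # never overwrite already-filled slots
        pos, tok = np.unravel_index(np.argmax(scores), scores.shape)
        out[pos] = tok                  # commit the single most confident guess
    return out

# Toy model with fixed scores, so the most confident positions fill first.
fixed = np.array([[0.1, 0.9],    # position 0 prefers token 1 (conf 0.9)
                  [0.8, 0.2],    # position 1 prefers token 0 (conf 0.8)
                  [0.3, 0.95]])  # position 2 prefers token 1 (conf 0.95)
result = nonseq_greedy_decode(lambda out: fixed.copy(), 3)
print(result)  # fill order is positions 2, 0, 1; output is [1 0 1]
```

With a real model, `score_fn` would condition on the partial output, so later decisions can exploit the high-confidence tokens committed earlier.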

Paper Details

Authors:
Moon-jung Chae, Kyubyong Park, Jinhyun Bang, Soobin Suh, Jonghyuk Park, Namju Kim, Jonghun Park
Submitted On:
13 April 2018 - 12:22am

Document Files

NSGD_poster_at_ICASSP2018_v1.1.pdf


[1] Moon-jung Chae, Kyubyong Park, Jinhyun Bang, Soobin Suh, Jonghyuk Park, Namju Kim, Jonghun Park, "CONVOLUTIONAL SEQUENCE TO SEQUENCE MODEL WITH NON-SEQUENTIAL GREEDY DECODING FOR GRAPHEME TO PHONEME CONVERSION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2586. Accessed: Aug. 19, 2018.

Extended Pipeline for Content-Based Feature Engineering in Music Genre Recognition


We present a feature engineering pipeline for constructing musical signal characteristics, to be used in the design of a supervised model for musical genre identification. The key idea is to extend the traditional two-step process of extraction and classification with additional stand-alone phases that are no longer organized in a waterfall scheme. The whole system is realized by traversing backtrack arrows and cycles between the various stages.
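As a rough illustration of such a non-waterfall control flow, the sketch below lets a validation stage send execution back to feature extraction with new parameters. All stage names and the quality heuristic are hypothetical, not taken from the paper:

```python
# Minimal sketch of a pipeline with a backtrack arrow: if the extracted
# features fail a quality check, control cycles back to extraction with
# richer parameters instead of proceeding straight to classification.

def extract_features(audio, n_mfcc):
    # stand-in for real feature extraction (e.g. MFCC computation)
    return [sum(audio) / len(audio)] * n_mfcc

def validate(features):
    # stand-in quality heuristic deciding whether to backtrack
    return len(features) >= 20

def classify(features):
    # stand-in genre classifier
    return "rock" if features[0] > 0.5 else "classical"

def run_pipeline(audio, n_mfcc=13, max_backtracks=3):
    for _ in range(max_backtracks + 1):
        features = extract_features(audio, n_mfcc)
        if validate(features):
            return classify(features)
        n_mfcc *= 2  # backtrack: re-extract with more coefficients
    return classify(features)

print(run_pipeline([0.9, 0.8, 0.7]))  # backtracks 13 -> 26, then prints "rock"
```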

Paper Details

Authors:
Alessandro Tibo, Paolo Bientinesi
Submitted On:
12 April 2018 - 2:00pm

Document Files

Feature_Engineering_Pipeline


[1] Alessandro Tibo, Paolo Bientinesi, "Extended Pipeline for Content-Based Feature Engineering in Music Genre Recognition", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2459. Accessed: Aug. 19, 2018.

Deep ranking: triplet matchnet for music metric learning


Metric learning for music is an important problem for many music information retrieval (MIR) applications, such as music generation, analysis, retrieval, classification and recommendation. Traditional music metrics are mostly defined on linear transformations of handcrafted audio features, and may be inappropriate in many situations given the large variety of music styles and instrumentations.
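The triplet relative-similarity objective underlying models of this kind can be written as a hinge loss on squared distances. A minimal numpy sketch follows; the linear embedding used here is a hypothetical stand-in for the paper's network:

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin=1.0):
    """Hinge loss encouraging d(anchor, pos) + margin <= d(anchor, neg)."""
    d_pos = np.sum((anchor - pos) ** 2, axis=-1)
    d_neg = np.sum((anchor - neg) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

# A learned metric replaces handcrafted linear transforms: embed(x) = W @ x.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))            # hypothetical linear embedding
x_a, x_p, x_n = rng.standard_normal((3, 16))  # anchor / similar / dissimilar
loss = triplet_loss(W @ x_a, W @ x_p, W @ x_n)
print(float(loss) >= 0.0)  # hinge loss is non-negative by construction
```

Training would adjust `W` (or a deep network in its place) by gradient descent on this loss over many sampled triplets.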

Paper Details

Authors:
Rui Lu, Kailun Wu, Zhiyao Duan, Changshui Zhang
Submitted On:
2 March 2017 - 2:56am

Document Files

presentation.pdf


[1] Rui Lu, Kailun Wu, Zhiyao Duan, Changshui Zhang, "Deep ranking: triplet matchnet for music metric learning", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1574. Accessed: Aug. 19, 2018.

Song recommendation with Non-Negative Matrix factorization and graph total variation



This work formulates song recommendation as a matrix completion problem that benefits from collaborative filtering through Non-negative Matrix Factorization (NMF) and content-based filtering via total variation (TV) on graphs. The graphs encode both playlist proximity information and song similarity, using a rich combination of audio, meta-data and social features. As we demonstrate, our hybrid recommendation system is very versatile and incorporates several well-known methods while outperforming them. Particularly, we show on real-world data that our model overcomes w.r.t.
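A minimal numerical sketch of the idea, assuming a masked least-squares NMF objective with a graph-smoothness penalty (the chain graph, ranks, and step size below are illustrative stand-ins, not the paper's actual model):

```python
import numpy as np

# Sketch: complete a playlist-by-song matrix by minimizing
#   || M * (X - W H) ||_F^2  +  gamma * tr(H L H^T)
# where M masks the observed entries and L is a song-graph Laplacian
# standing in for audio / meta-data / social similarity.
rng = np.random.default_rng(1)
n_play, n_song, rank = 6, 5, 2
X = rng.random((n_play, n_song))
M = (rng.random(X.shape) < 0.7).astype(float)  # observed-entry mask

# chain graph over songs as a toy similarity structure
A = np.diag(np.ones(n_song - 1), 1); A = A + A.T
L = np.diag(A.sum(1)) - A                      # graph Laplacian

W = rng.random((n_play, rank)); H = rng.random((rank, n_song))
step, gamma = 0.05, 0.1
for _ in range(500):
    R = M * (W @ H - X)                        # residual on observed entries
    # projected gradient steps (constant factors absorbed into the step size);
    # the max(0, .) projection keeps both factors non-negative
    W = np.maximum(0.0, W - step * (R @ H.T))
    H = np.maximum(0.0, H - step * (W.T @ R + gamma * H @ L))

err = np.linalg.norm(M * (X - W @ H)) / np.linalg.norm(M * X)
print(err)  # relative error on observed entries, well below 1 after fitting
```

The unobserved entries of `W @ H` then serve as recommendation scores; the graph term pulls similar songs toward similar latent profiles.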

Paper Details

Authors:
Kirell Benzi, Vassilis Kalofolias, Xavier Bresson, Pierre Vandergheynst
Submitted On:
20 March 2016 - 12:15am

Document Files

icassp_2016_2.pdf


[1] Kirell Benzi, Vassilis Kalofolias, Xavier Bresson, Pierre Vandergheynst, "Song recommendation with Non-Negative Matrix factorization and graph total variation", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/856. Accessed: Aug. 19, 2018.

Emotion Classification: How Does an Automated System Compare to Naive Human Coders?


Emotions play a vital role in social interactions, and this fact, together with the demand for novel human-computer interaction applications, has led to the development of a number of automatic emotion classification systems. However, it is still debatable whether the performance of such systems is comparable with that of human coders. To address this issue, we present a comprehensive comparison, on a speech-based emotion classification task, between 138 Amazon Mechanical Turk workers (Turkers) and a state-of-the-art automatic computer system.

Paper Details

Authors:
Kenneth Imade, Na Yang, Melissa Sturge-Apple, Zhiyao Duan, Wendi Heinzelman
Submitted On:
17 March 2016 - 3:26pm

Document Files

EmotionICASSP16.pdf


[1] Kenneth Imade, Na Yang, Melissa Sturge-Apple, Zhiyao Duan, Wendi Heinzelman, "Emotion Classification: How Does an Automated System Compare to Naive Human Coders?", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/749. Accessed: Aug. 19, 2018.



Feature Adapted Convolutional Neural Networks for Downbeat Tracking



Paper Details

Authors:
Durand, S., Bello, J. P., Bertrand, D., and Richard, G.
Submitted On:
14 March 2016 - 2:03pm

Document Files

test.pdf


[1] Durand, S., Bello, J. P., Bertrand, D., and Richard, G., "Feature Adapted Convolutional Neural Networks for Downbeat Tracking", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/678. Accessed: Aug. 19, 2018.