
Music Signal Processing

MUSIC BOUNDARY DETECTION BASED ON A HYBRID DEEP MODEL OF NOVELTY, HOMOGENEITY, REPETITION AND DURATION


Current state-of-the-art music boundary detection methods rely on local features, an approach that fails to explicitly incorporate the statistical properties of the detected segments. This paper presents a music boundary detection method that simultaneously considers a fitness measure based on the boundary posterior probability, the likelihood of the segment duration sequence, and the acoustic consistency within a segment.
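Combining a boundary posterior with a duration model lends itself to dynamic-programming segmentation. The following is a minimal sketch of that idea (it omits the paper's acoustic-consistency term, and all scores are made up for illustration):

```python
import math

def segment(boundary_logp, dur_logp, n):
    # best[t]: score of the best segmentation of frames [0, t);
    # each segment (s, t] contributes the boundary log-posterior at its
    # end plus the log-likelihood of its duration t - s.
    best = [0.0] + [-math.inf] * n
    back = [0] * (n + 1)
    for t in range(1, n + 1):
        for s in range(t):
            score = best[s] + boundary_logp[t - 1] + dur_logp(t - s)
            if score > best[t]:
                best[t], back[t] = score, s
    cuts, t = [], n
    while t > 0:                 # recover boundaries from backpointers
        cuts.append(t)
        t = back[t]
    return sorted(cuts)

# Toy run: 6 frames, strong boundary cues at frames 3 and 6, and a
# duration prior preferring 3-frame segments.
blp = [-2.0, -2.0, -0.1, -2.0, -2.0, -0.1]
cuts = segment(blp, lambda d: -abs(d - 3), 6)   # → [3, 6]
```

The interplay is visible in the toy run: neither the boundary cues nor the duration prior alone forces the result; their sum does.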

Paper Details

Authors:
Akira Maezawa
Submitted On:
15 May 2019 - 11:50am

Document Files

ICASSP2019-maezawa.pdf


[1] Akira Maezawa, "MUSIC BOUNDARY DETECTION BASED ON A HYBRID DEEP MODEL OF NOVELTY, HOMOGENEITY, REPETITION AND DURATION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4527. Accessed: Aug. 23, 2019.

Modeling Melodic Feature Dependency with Modularized Variational Auto-Encoder


Automatic melody generation has been a long-time aspiration for both AI researchers and musicians. However, learning to generate euphonious melodies has turned out to be highly challenging. This paper introduces 1) a new variant of the variational autoencoder (VAE), whose model structure is designed in a modularized manner in order to model polyphonic and dynamic music with domain knowledge, and 2) a hierarchical encoding/decoding strategy, which explicitly models the dependency between melodic features.
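The hierarchical decoding strategy can be pictured as decoding one melodic feature first and conditioning the next on it. Below is a toy, purely deterministic stand-in for that dependency structure (every function here is invented for illustration; in the actual model each decoder would be a neural network):

```python
# Hypothetical, purely deterministic stand-ins for the modular decoders.
def decode_rhythm(z):
    # Latent code -> onset pattern (1 = note onset, 0 = hold).
    return [1 if (z + i) % 3 == 0 else 0 for i in range(8)]

def decode_pitch(z, rhythm):
    # Pitch decoding is conditioned on the already-decoded rhythm,
    # which is the hierarchical feature dependency in miniature.
    pitches, cur = [], 60 + z % 12
    for onset in rhythm:
        if onset:
            cur += 2             # step up at each onset
        pitches.append(cur)
    return pitches

z = 3                            # a discretized latent code
rhythm = decode_rhythm(z)
melody = list(zip(rhythm, decode_pitch(z, rhythm)))
```

The point of the sketch is only the data flow: pitch is a function of the latent *and* the rhythm, rather than being decoded independently.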

Paper Details

Authors:
Yu-An Wang, Yu-Kai Huang, Tzu-Chuan Lin, Shang-Yu Su
Submitted On:
10 May 2019 - 8:16pm

Document Files

ICASSP19_MusicVAE_slide.pdf


[1] Yu-An Wang, Yu-Kai Huang, Tzu-Chuan Lin, Shang-Yu Su, "Modeling Melodic Feature Dependency with Modularized Variational Auto-Encoder", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4426. Accessed: Aug. 23, 2019.

Intonation: a Dataset of Quality Vocal Performances Refined by Spectral Clustering on Pitch Congruence


We introduce the "Intonation" dataset of amateur vocal performances with a tendency for good intonation, collected from Smule, Inc. The dataset can be used for music information retrieval tasks such as autotuning, query by humming, and singing style analysis. It is available upon request on the Stanford CCRMA DAMP website. We describe a semi-supervised approach to selecting the audio recordings from a larger collection of performances based on intonation patterns.
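A simple intonation feature of the kind such a selection pipeline might cluster on is the deviation of each pitch estimate from the nearest equal-tempered note, measured in cents (a sketch only; the paper's exact features may differ):

```python
import math

A4 = 440.0

def cents_off(f0):
    # Deviation, in cents, from the nearest equal-tempered pitch
    # (12-tone scale anchored at A4 = 440 Hz).
    semis = 12.0 * math.log2(f0 / A4)
    return 100.0 * (semis - round(semis))

def intonation_score(track):
    # Mean absolute deviation over a pitch track; lower = better intonation.
    return sum(abs(cents_off(f)) for f in track) / len(track)
```

Per-performance scores like this give a scalar (or, framewise, a distribution) on which performances "with a tendency for good intonation" can be separated from the rest.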

Paper Details

Authors:
Sanna Wager, George Tzanetakis, Stefan Sullivan, Cheng-i Wang, John Shimmin, Minje Kim, Perry Cook
Submitted On:
10 May 2019 - 3:28pm

Document Files

ICASSP_2019_intonation_final.pdf


[1] Sanna Wager, George Tzanetakis, Stefan Sullivan, Cheng-i Wang, John Shimmin, Minje Kim, Perry Cook, "Intonation: a Dataset of Quality Vocal Performances Refined by Spectral Clustering on Pitch Congruence", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4392. Accessed: Aug. 23, 2019.

ENHANCING MUSIC FEATURES BY KNOWLEDGE TRANSFER FROM USER-ITEM LOG DATA

Paper Details

Authors:
Donmoon Lee, Jaejun Lee, Jeongsoo Park, and Kyogu Lee
Submitted On:
10 May 2019 - 12:59pm

Document Files

1905_ICASSP_dmlee_compact.pdf


[1] Donmoon Lee, Jaejun Lee, Jeongsoo Park, and Kyogu Lee, "ENHANCING MUSIC FEATURES BY KNOWLEDGE TRANSFER FROM USER-ITEM LOG DATA", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4372. Accessed: Aug. 23, 2019.

Modeling nonlinear audio effects with end-to-end deep neural networks


Audio processors whose parameters are modified periodically over time are often referred to as time-varying or modulation-based audio effects. Most existing methods for modeling these types of effect units are optimized for a very specific circuit and cannot be efficiently generalized to other time-varying effects. Based on convolutional and recurrent neural networks, we propose a deep learning architecture for generic black-box modeling of audio processors with long-term memory. We explore the capabilities of
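For concreteness, here is a minimal example of the kind of time-varying unit such a black-box model must capture: an LFO-driven tremolo, with illustrative parameter values (not taken from the paper):

```python
import math

def tremolo(x, sr=16000, rate_hz=5.0, depth=0.5):
    # Amplitude modulation by a low-frequency oscillator: the gain
    # applied to each sample changes periodically over time, which is
    # what makes the effect time-varying rather than memoryless.
    out = []
    for n, s in enumerate(x):
        lfo = math.sin(2 * math.pi * rate_hz * n / sr)
        gain = 1.0 - 0.5 * depth * (1.0 - lfo)  # sweeps between 1-depth and 1
        out.append(s * gain)
    return out

y = tremolo([1.0] * 1600)   # 0.1 s of a constant signal
```

Because the output depends on the sample index `n` and not just the current input value, a memoryless (instantaneous) model cannot reproduce it; this is why the abstract stresses long-term memory in the architecture.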

Paper Details

Authors:
Emmanouil Benetos, Joshua D. Reiss
Submitted On:
10 May 2019 - 12:06pm

Document Files

ICASSP___Presentation_Martinez_Ramirez.pdf


[1] Emmanouil Benetos, Joshua D. Reiss, "Modeling nonlinear audio effects with end-to-end deep neural networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4368. Accessed: Aug. 23, 2019.

Singing voice separation: a study on training data.


In recent years, singing voice separation systems have shown increased performance due to the use of supervised training. The design of training datasets is known to be a crucial factor in the performance of such systems. We investigate how the characteristics of the training dataset impact the separation performance of state-of-the-art singing voice separation algorithms. We show that separation quality and diversity are two important and complementary assets of a good training dataset. We also provide insights on possible transforms for performing data augmentation for this task.
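One common family of augmentation transforms for separation training is random remixing of the stems. A toy sketch under assumed gain ranges (the ranges and the transform itself are illustrative, not necessarily those studied in the paper):

```python
import random

def remix(vocals, accomp, rng):
    # One augmentation pass: scale each stem by an independent random
    # gain before summing into a training mixture. The separation
    # target must be scaled consistently with the mixture.
    gv = rng.uniform(0.7, 1.3)
    ga = rng.uniform(0.7, 1.3)
    mix = [gv * v + ga * a for v, a in zip(vocals, accomp)]
    target = [gv * v for v in vocals]
    return mix, target

rng = random.Random(0)
mix, target = remix([0.1, -0.2], [0.05, 0.05], rng)
```

Each epoch then sees a different vocal-to-accompaniment balance, which effectively enlarges the training set without new recordings.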

Paper Details

Authors:
Laure Prétet, Romain Hennequin, Jimena Royo-Letelier, Andrea Vaglio
Submitted On:
22 May 2019 - 11:32am

Document Files

ICASSP_poster_3382.pdf


[1] Laure Prétet, Romain Hennequin, Jimena Royo-Letelier, Andrea Vaglio, "Singing voice separation: a study on training data.", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4287. Accessed: Aug. 23, 2019.

CNN Based Two-Stage Multi-Resolution End-to-End Model for Singing Melody Extraction


Inspired by human hearing perception, we propose a two-stage multi-resolution end-to-end model for singing melody extraction. A convolutional neural network (CNN) is the core of the proposed model, generating multi-resolution representations. The 1-D and 2-D multi-resolution analyses on the waveform and a spectrogram-like representation are successively carried out using 1-D and 2-D CNN kernels of different lengths and sizes.
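The multi-resolution idea of analyzing the same signal with kernels of different lengths can be illustrated with a plain valid-mode 1-D convolution (the averaging kernels here are purely for illustration; the model's kernels are learned):

```python
def conv1d(x, k):
    # Valid-mode 1-D convolution (strictly, cross-correlation) of
    # signal x with kernel k.
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
short = conv1d(x, [0.5, 0.5])                 # fine time resolution
long_ = conv1d(x, [0.25, 0.25, 0.25, 0.25])   # coarse, smoothed view
```

The short kernel preserves the fast oscillation while the long kernel averages it away entirely, which is exactly the trade-off a bank of kernels of different lengths exposes to the later stages.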

Paper Details

Authors:
Bo-Jun Li, Tai-Shih Chi
Submitted On:
9 May 2019 - 1:00pm

Document Files

ICASSP2019_MINGTSO.pdf


[1] Bo-Jun Li, Tai-Shih Chi, "CNN Based Two-Stage Multi-Resolution End-to-End Model for Singing Melody Extraction", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4223. Accessed: Aug. 23, 2019.

End-to-End Lyrics Alignment Using An Audio-to-Character Recognition Model


Time-aligned lyrics can enrich the music listening experience by enabling karaoke, text-based song retrieval, intra-song navigation, and other applications. Compared to text-to-speech alignment, lyrics alignment remains highly challenging, despite many attempts to break the problem down by combining numerous sub-modules, including vocal separation and detection. Furthermore, training has typically required fine-grained annotations to be available in some form.
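Audio-to-character models of this kind are typically trained with a CTC-style objective (an assumption on our part for this sketch). The standard CTC collapsing rule, which maps per-frame character outputs to a transcript while the surviving frames' positions yield the alignment, is simple:

```python
BLANK = "-"

def ctc_collapse(frames):
    # Standard CTC decoding rule: merge consecutive repeats,
    # then drop blank symbols.
    out, prev = [], None
    for c in frames:
        if c != prev and c != BLANK:
            out.append(c)
        prev = c
    return "".join(out)
```

For example, the per-frame output `--hh-ee--ll-llo--` collapses to `hello`; the blank lets the model emit genuine double letters (the two `l`s) as distinct characters.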

Paper Details

Authors:
Daniel Stoller, Simon Durand, Sebastian Ewert
Submitted On:
17 May 2019 - 5:14am

Document Files

ICASSP 2019 Slides as presented in the session

Demo video for presentation slides


[1] Daniel Stoller, Simon Durand, Sebastian Ewert, "End-to-End Lyrics Alignment Using An Audio-to-Character Recognition Model", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4220. Accessed: Aug. 23, 2019.

DEEP POLYPHONIC ADSR PIANO NOTE TRANSCRIPTION

Paper Details

Authors:
Sebastian Böck, Gerhard Widmer
Submitted On:
9 May 2019 - 8:54am

Document Files

icassp_2019_poster_to_print.pdf


[1] Sebastian Böck, Gerhard Widmer, "DEEP POLYPHONIC ADSR PIANO NOTE TRANSCRIPTION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4198. Accessed: Aug. 23, 2019.

Deep Learning for Tube Amplifier Emulation


Analog audio effects and synthesizers often owe their distinct sound to circuit nonlinearities. Faithfully modeling such a significant aspect of the original sound in virtual analog software can prove challenging. The current work proposes a generic data-driven approach to virtual analog modeling and applies it to the Fender Bassman 56F-A vacuum-tube amplifier. Specifically, a feedforward variant of the WaveNet deep neural network is trained to carry out a regression on audio waveform samples from the input to the output of a SPICE model of the tube amplifier.
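Feedforward WaveNet-style models obtain long temporal context from stacks of dilated causal convolutions, and their receptive field follows a standard formula. A quick way to compute it (the doubling dilation pattern below is a common choice, not necessarily the configuration used in the paper):

```python
def receptive_field(kernel_size, dilations):
    # Receptive field, in samples, of stacked dilated causal
    # convolutions: 1 + sum over layers of (kernel_size - 1) * dilation.
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# e.g. 10 layers with dilations 1, 2, 4, ..., 512 and kernel size 2
rf = receptive_field(2, [2 ** i for i in range(10)])   # → 1024
```

At a 44.1 kHz sample rate, 1024 samples is roughly 23 ms of context, which bounds how much of the amplifier's memory (e.g. coupling-capacitor dynamics) the regression can capture.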

Paper Details

Authors:
Eero-Pekka Damskägg, Lauri Juvela, Etienne Thuillier, Vesa Valimaki
Submitted On:
9 May 2019 - 6:31am

Document Files

poster_icassp19_deep_learning_for_tube_amplifier_emulation.pdf


[1] Eero-Pekka Damskägg, Lauri Juvela, Etienne Thuillier, Vesa Valimaki, "Deep Learning for Tube Amplifier Emulation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4186. Accessed: Aug. 23, 2019.
