
Speaker Recognition and Characterization (SPE-SPKR)

Multi-level deep neural network adaptation for speaker verification using MMD and consistency regularization


Adapting speaker verification (SV) systems to a new environment is a very challenging task. Current adaptation methods in SV mainly focus on the backend, i.e., adaptation is carried out after the speaker embeddings have been created. In this paper, we present a DNN-based adaptation method using maximum mean discrepancy (MMD). Our method exploits two important aspects neglected by previous research.
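
To make the core adaptation term concrete, here is a minimal sketch of a multi-kernel (RBF) MMD loss between batches of source- and target-domain activations, written in PyTorch. The kernel widths, the choice of layers to match, and the weighting against the speaker-classification loss are illustrative assumptions rather than the paper's settings.

import torch

def mmd_rbf(x, y, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Estimate of squared MMD between two batches, using a sum of RBF kernels."""
    xx = torch.cdist(x, x) ** 2          # pairwise squared distances within the source batch
    yy = torch.cdist(y, y) ** 2          # within the target batch
    xy = torch.cdist(x, y) ** 2          # across the two domains
    k_xx = k_yy = k_xy = 0.0
    for s in sigmas:                     # multi-kernel trick: sum RBF kernels of several widths
        k_xx = k_xx + torch.exp(-xx / (2 * s ** 2))
        k_yy = k_yy + torch.exp(-yy / (2 * s ** 2))
        k_xy = k_xy + torch.exp(-xy / (2 * s ** 2))
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

# Hypothetical usage: add MMD terms at one or more layers to the usual classification loss.
# loss = ce_loss + lambda_mmd * mmd_rbf(hidden_src, hidden_tgt)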

Paper Details

Authors:
Weiwei Lin, Man-Wai Mak, Na Li, Dan Su, Dong Yu
Submitted On:
13 May 2020 - 10:05pm

Document Files

3043-LinMakLiSuYu.pdf

Weiwei Lin, Man-Wai Mak, Na Li, Dan Su, Dong Yu, "Multi-level deep neural network adaptation for speaker verification using MMD and consistency regularization", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5193. Accessed: May 30, 2020.

Information Maximized Variational Domain Adversarial Learning for Speaker Verification


Domain mismatch is a common problem in speaker verification. This paper proposes an information-maximized variational domain adversarial neural network (InfoVDANN) to reduce domain mismatch by incorporating an InfoVAE into domain adversarial training (DAT). DAT aims to produce speaker-discriminative and domain-invariant features. The InfoVAE has two roles. First, it performs variational regularization on the learned features so that they follow a Gaussian distribution, which is essential for the standard PLDA backend.
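
As a rough illustration of the domain-adversarial ingredient only (the InfoVAE part is omitted), the sketch below implements a gradient-reversal layer with a speaker head and a domain head in PyTorch. Layer sizes and the lambda schedule are placeholders, not the InfoVDANN configuration.

import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips the sign of the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None   # reversed gradient for the encoder, none for lam

class DANN(nn.Module):
    def __init__(self, feat_dim=512, emb_dim=200, n_speakers=1000, n_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU())
        self.spk_head = nn.Linear(emb_dim, n_speakers)   # speaker-discriminative branch
        self.dom_head = nn.Linear(emb_dim, n_domains)    # adversarial domain branch

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        return self.spk_head(z), self.dom_head(GradReverse.apply(z, lam))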

Paper Details

Authors:
Youzhi Tu, Man-Wai Mak, Jen-Tzung Chien
Submitted On:
13 May 2020 - 10:02pm

Document Files

5091-TuMakChien.pdf

Youzhi Tu, Man-Wai Mak, Jen-Tzung Chien, "Information Maximized Variational Domain Adversarial Learning for Speaker Verification", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5191. Accessed: May 30, 2020.

ICASSP2020 TEXT-INDEPENDENT SPEAKER VERIFICATION WITH ADVERSARIAL LEARNING ON SHORT UTTERANCES


A text-independent speaker verification system suffers severe performance degradation under short-utterance conditions. To address this problem, in this paper we propose an adversarially learned embedding mapping model that directly maps a short-utterance embedding to an enhanced embedding with increased discriminability. In particular, a Wasserstein GAN with a variety of loss criteria is investigated. These loss functions have distinct optimization objectives, and some of them are less favoured in speaker verification research.
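
The sketch below conveys the general embedding-mapping setup (not the authors' exact model): a generator maps a short-utterance embedding to an enhanced one, trained against a Wasserstein critic together with one possible auxiliary similarity criterion. The embedding size, network widths, and loss weight are assumptions.

import torch
from torch import nn

emb_dim = 256                              # assumed embedding dimensionality
G = nn.Sequential(nn.Linear(emb_dim, 512), nn.ReLU(), nn.Linear(512, emb_dim))   # mapper
D = nn.Sequential(nn.Linear(emb_dim, 512), nn.ReLU(), nn.Linear(512, 1))         # critic

def critic_loss(short_emb, long_emb):
    fake = G(short_emb).detach()
    return D(fake).mean() - D(long_emb).mean()    # WGAN critic objective (Lipschitz constraint handled separately)

def generator_loss(short_emb, long_emb, alpha=10.0):
    fake = G(short_emb)
    adv = -D(fake).mean()                                               # fool the critic
    sim = 1 - nn.functional.cosine_similarity(fake, long_emb).mean()    # one example of an auxiliary loss
    return adv + alpha * sim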

Paper Details

Authors:
Kai Liu, Huan Zhou
Submitted On:
13 May 2020 - 9:57pm

Document Files

Adversarial Learning, Speaker Recognition, Short utterances

Kai Liu, Huan Zhou, "ICASSP2020 TEXT-INDEPENDENT SPEAKER VERIFICATION WITH ADVERSARIAL LEARNING ON SHORT UTTERANCES", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5185. Accessed: May 30, 2020.

A Memory Augmented Architecture For Continuous Speaker Identification In Meetings


We introduce and analyze a novel approach to the problem of speaker identification in multi-party recorded meetings. Given a speech segment and a set of available candidate profiles, a data-driven approach is proposed that learns the distance relations between them, aiming to identify the correct speaker label corresponding to that segment. A recurrent, memory-based architecture is employed, since this class of neural networks has been shown to yield improved performance in problems requiring relational reasoning.
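
Purely as an interface illustration (the paper's recurrent, memory-based relational architecture is not reproduced here), the sketch below scores a segment embedding against a set of candidate profiles with a learned comparison network and returns a distribution over candidate labels.

import torch
from torch import nn

class LearnedMatcher(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, segment, profiles):
        # segment: (emb_dim,)   profiles: (n_candidates, emb_dim)
        seg = segment.unsqueeze(0).expand_as(profiles)
        logits = self.score(torch.cat([seg, profiles], dim=-1)).squeeze(-1)
        return logits.softmax(dim=-1)       # posterior over the candidate speaker labels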

Paper Details

Authors:
Dimitrios Dimitriadis
Submitted On:
13 May 2020 - 9:46pm

Document Files

2020_ICASSP_RMC_MSR_pres.pdf

Dimitrios Dimitriadis, "A Memory Augmented Architecture For Continuous Speaker Identification In Meetings", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5184. Accessed: May 30, 2020.

Speaker Diarization with Session-level Speaker Embedding Refinement using Graph Neural Networks


Deep speaker embedding models have been commonly used as a building block for speaker diarization systems; however, the speaker embedding model is usually trained according to a global loss defined on the training data, which could be sub-optimal for distinguishing speakers locally in a specific meeting session. In this work we present the first use of graph neural networks (GNNs) for the speaker diarization problem, utilizing a GNN to refine speaker embeddings locally using the structural information between speech segments inside each session.
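
A minimal sketch of the session-level refinement idea is given below: segments are nodes, edge weights come from embedding similarity, and each embedding is updated from its neighbours through one graph layer with a residual connection. This is a generic GNN layer for illustration, not the architecture used in the paper.

import torch
from torch import nn

class GraphRefine(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, emb):                           # emb: (n_segments, dim), one meeting session
        adj = torch.softmax(emb @ emb.t(), dim=-1)    # similarity-based, row-normalised adjacency
        return torch.relu(self.lin(adj @ emb) + emb)  # neighbour aggregation + residual connection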

Paper Details

Authors:
Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno
Submitted On:
13 May 2020 - 8:20pm

Document Files

Slides for the paper "Speaker Diarization with Session-level Speaker Embedding Refinement using Graph Neural Networks"

Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno, "Speaker Diarization with Session-level Speaker Embedding Refinement using Graph Neural Networks", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5168. Accessed: May 30, 2020.

An Ensemble Based Approach for Generalized Detection of Spoofing Attacks to Automatic Speaker Recognizers


As automatic speaker recognition systems become mainstream, voice spoofing attacks are on the rise. Common attack strategies include replay, text-to-speech synthesis, and voice conversion. While previously proposed end-to-end detection frameworks have been shown to be effective in spotting attacks for one particular spoofing strategy, they have relied on different models, architectures, and speech representations, depending on the spoofing strategy.
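
At its simplest, the generalized (ensemble) detection idea amounts to fusing the scores of several countermeasure subsystems into a single bona fide/spoof decision; the sketch below shows weighted score-level fusion with placeholder weights and threshold, not the authors' fusion rule.

import numpy as np

def fuse_scores(system_scores, weights=None):
    """system_scores: (n_systems, n_trials) array of per-subsystem spoofing scores."""
    s = np.asarray(system_scores, dtype=float)
    w = np.ones(s.shape[0]) / s.shape[0] if weights is None else np.asarray(weights, dtype=float)
    return w @ s                                  # weighted score-level fusion

scores = fuse_scores([[0.2, 0.9], [0.4, 0.8]])    # two subsystems, two trials
decisions = scores > 0.5                          # threshold would be tuned on a development set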

Paper Details

Authors:
Jahangir Alam, Tiago Falk
Submitted On:
13 May 2020 - 5:21pm

Document Files

ICASSP_Spoofing.pdf

Jahangir Alam, Tiago Falk, "An Ensemble Based Approach for Generalized Detection of Spoofing Attacks to Automatic Speaker Recognizers", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5149. Accessed: May 30, 2020.

Meta Learning for Robust Child/Adult Classification from Speech


Computational modeling of naturalistic conversations in clinical applications has seen growing interest in the past decade. An important use case involves child-adult interactions within the autism diagnosis and intervention domain. In this paper, we address a specific sub-problem of speaker diarization, namely child-adult speaker classification in such dyadic conversations with specified roles. Training a speaker classification system that is robust to speaker and channel conditions is challenging due to the inherent variability in the speech of the children and the adult interlocutors.
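
As one way to picture the meta-learning setup (the paper's exact formulation may differ), the sketch below runs a single prototypical-network style episode for two-way child/adult classification; the encoder, feature dimensionality, and episode sizes are placeholders.

import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))   # toy utterance encoder

def episode_loss(support_x, support_y, query_x, query_y, n_classes=2):
    """One episode: build class prototypes from the support set, classify the query set."""
    z_s, z_q = encoder(support_x), encoder(query_x)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_classes)])
    logits = -torch.cdist(z_q, protos)            # negative distance to each class prototype
    return nn.functional.cross_entropy(logits, query_y)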

Paper Details

Authors:
Nithin Rao Koluguri, Manoj Kumar, So Hyun Kim, Catherine Lord, Shrikanth Narayanan
Submitted On:
13 May 2020 - 5:07pm

Document Files

metaLearning_slides.pdf

Nithin Rao Koluguri, Manoj Kumar, So Hyun Kim, Catherine Lord, Shrikanth Narayanan, "Meta Learning for Robust Child/Adult Classification from Speech", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5145. Accessed: May 30, 2020.

Robust speaker recognition using unsupervised adversarial invariance


In this paper, we address the problem of speaker recognition in challenging acoustic conditions using a novel method to extract robust speaker-discriminative speech representations. We adopt a recently proposed unsupervised adversarial invariance architecture to train a network that maps speaker embeddings extracted using a pre-trained model onto two lower dimensional embedding spaces. The embedding spaces are learnt to disentangle speaker-discriminative information from all other information present in the audio recordings, without supervision about the acoustic conditions.
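
To make the two-space disentanglement concrete, here is a coarse sketch: a pre-trained speaker embedding is projected into two lower-dimensional codes, one trained to stay speaker-discriminative and one to absorb everything else, with auxiliary predictors that adversarially try to recover each code from the other. All dimensions and heads are illustrative, not the paper's.

import torch
from torch import nn

class UAISplit(nn.Module):
    def __init__(self, in_dim=512, d1=128, d2=64, n_speakers=1000):
        super().__init__()
        self.e1 = nn.Linear(in_dim, d1)            # speaker-relevant code
        self.e2 = nn.Linear(in_dim, d2)            # nuisance code
        self.spk = nn.Linear(d1, n_speakers)       # speaker classifier sees e1 only
        self.recon = nn.Linear(d1 + d2, in_dim)    # reconstruct the input from both codes
        self.dis12 = nn.Linear(d1, d2)             # adversary: predict e2 from e1
        self.dis21 = nn.Linear(d2, d1)             # adversary: predict e1 from e2

    def forward(self, x):
        h1, h2 = self.e1(x), self.e2(x)
        return self.spk(h1), self.recon(torch.cat([h1, h2], -1)), self.dis12(h1), self.dis21(h2)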

Paper Details

Authors:
Monisankha Pal, Shrikanth Narayanan
Submitted On:
5 May 2020 - 1:34am

Document Files

Raghuveer_Peri_ICASSP2020.pdf

Monisankha Pal, Shrikanth Narayanan, "Robust speaker recognition using unsupervised adversarial invariance", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5123. Accessed: May 30, 2020.


AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES


This paper presents an improved deep embedding learning method based on a convolutional neural network (CNN) for text-independent speaker verification. Two improvements are proposed for x-vector embedding learning: (1) a multiscale convolution (MSCNN) is adopted in the frame-level layers to capture the complementary speaker information in different receptive fields; (2) a Baum-Welch statistics attention (BWSA) mechanism is applied in the pooling layer, which can integrate more useful long-term speaker characteristics in the temporal pooling layer.
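
The sketch below shows the two ingredients in generic form: parallel 1-D convolutions with different kernel sizes whose outputs are concatenated (a multiscale convolution block), followed by attentive statistics pooling over time. The Baum-Welch statistics attention described in the paper is replaced here by ordinary learned attention for illustration, and all sizes are placeholders.

import torch
from torch import nn

class MultiScaleConv(nn.Module):
    def __init__(self, in_ch=40, out_ch=128, kernels=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels])

    def forward(self, x):                             # x: (batch, in_ch, frames)
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

class AttentiveStatsPool(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.att = nn.Conv1d(ch, 1, 1)

    def forward(self, h):                             # h: (batch, ch, frames)
        w = torch.softmax(self.att(h), dim=-1)        # per-frame attention weights
        mu = (h * w).sum(-1)
        sigma = ((h - mu.unsqueeze(-1)) ** 2 * w).sum(-1).clamp(min=1e-8).sqrt()
        return torch.cat([mu, sigma], dim=1)          # weighted mean and std as the utterance-level vector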

Paper Details

Submitted On:
14 April 2020 - 6:25am

Document Files

ICASSP2020_poster_BinGu.pdf

[1] , "AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5095. Accessed: May. 30, 2020.
@article{5095-20,
url = {http://sigport.org/5095},
author = { },
publisher = {IEEE SigPort},
title = {AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES},
year = {2020} }
TY - EJOUR
T1 - AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES
AU -
PY - 2020
PB - IEEE SigPort
UR - http://sigport.org/5095
ER -
. (2020). AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES. IEEE SigPort. http://sigport.org/5095
, 2020. AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES. Available at: http://sigport.org/5095.
. (2020). "AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES." Web.
1. . AN IMPROVED DEEP NEURAL NETWORK FOR MODELING SPEAKER CHARACTERISTICS AT DIFFERENT TEMPORAL SCALES [Internet]. IEEE SigPort; 2020. Available from : http://sigport.org/5095
