ICASSP 2020

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. ICASSP 2020 will feature world-class presentations by internationally renowned speakers and cutting-edge session topics, and will provide a fantastic opportunity to network with like-minded professionals from around the world.

BLASTER: An off-grid method for blind and regularized acoustic echoes retrieval


Acoustic echoes retrieval is a research topic that is gaining importance in many speech and audio signal processing applications such as speech enhancement, source separation, dereverberation and room geometry estimation. This work proposes a novel approach to blindly retrieve the off-grid timing of early acoustic echoes from a stereophonic recording of an unknown sound source such as speech. It builds on the recent framework of continuous dictionaries.
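The paper's blind, off-grid formulation is beyond a short snippet, but the classical on-grid intuition it improves upon can be sketched: echoes of a roughly white source appear as secondary peaks in the autocorrelation of the recorded signal. A toy single-channel sketch (function name and parameters are ours, not from the paper):

```python
import numpy as np

def echo_delays_by_autocorrelation(x, fs, n_peaks=1, min_lag=1):
    """Toy on-grid baseline: read echo lags off autocorrelation peaks.

    This is NOT the BLASTER algorithm (which solves an off-grid convex
    program over a continuous dictionary); it only illustrates the
    classical intuition the paper improves upon.
    """
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    r[:min_lag] = 0.0                # suppress the trivial zero-lag peak
    lags = np.sort(np.argsort(r)[::-1][:n_peaks])
    return lags / fs                 # echo delays in seconds

# Synthetic channel: direct path plus one echo 5 ms later
fs = 16000
s = np.random.default_rng(0).standard_normal(4096)   # white-ish source
h = np.zeros(200)
h[0], h[80] = 1.0, 0.6               # echo at 80 samples = 5 ms
x = np.convolve(s, h)
print(echo_delays_by_autocorrelation(x, fs, min_lag=10))
```

Off-grid methods such as BLASTER avoid the sample-grid quantization this baseline inherits from the discrete lag axis.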

Paper Details

Authors:
Clement Elvira, Nancy Bertin, Remi Gribonval
Submitted On:
3 May 2020 - 6:48am

Document Files

icassp2020blaster.pdf


[1] Clement Elvira, Nancy Bertin, Remi Gribonval, "BLASTER: An off-grid method for blind and regularized acoustic echoes retrieval", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5117. Accessed: Aug. 06, 2020.

Accelerating Linear Algebra Kernels on a Massively Parallel Reconfigurable Architecture

Paper Details

Authors:
Jian Zhou, Subhankar Pal, David Blaauw, Hun Seok Kim, Trevor Mudge, Ronald Dreslinski, Chaitali Chakrabarti
Submitted On:
1 May 2020 - 8:36pm

Document Files

soorishetty.pdf


[1] Jian Zhou, Subhankar Pal, David Blaauw, Hun Seok Kim, Trevor Mudge, Ronald Dreslinski, Chaitali Chakrabarti, "Accelerating Linear Algebra Kernels on a Massively Parallel Reconfigurable Architecture", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5116. Accessed: Aug. 06, 2020.

A Hybrid Approach for Thermographic Imaging with Deep Learning


We propose a hybrid method for reconstructing thermographic images by combining the recently developed virtual wave concept with deep neural networks. The method can be used to detect defects inside materials in a non-destructive way. We propose two architectures, along with a thorough evaluation that shows a substantial improvement over state-of-the-art reconstruction procedures. The virtual waves are invariant to the thermal diffusivity of the material.

Paper Details

Authors:
Péter Kovács, Bernhard Lehner, Gregor Thummerer, Günther Mayr, Peter Burgholzer, Mario Huemer
Submitted On:
30 April 2020 - 11:14am

Document Files

Presentation for ICASSP2020


[1] Péter Kovács, Bernhard Lehner, Gregor Thummerer, Günther Mayr, Peter Burgholzer, Mario Huemer, "A Hybrid Approach for Thermographic Imaging with Deep Learning", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5115. Accessed: Aug. 06, 2020.

Two-Step Sound Source Separation: Training on Learned Latent Targets (Presentation)


In this paper, we propose a two-step training procedure for source separation via a deep neural network. In the first step, we learn a transform (and its inverse) to a latent space where masking-based separation performance using oracle masks is optimal. In the second step, we train a separation module that operates on the previously learned space. To do so, we also use a scale-invariant signal-to-distortion ratio (SI-SDR) loss function that works in the latent space, and we prove that it lower-bounds the SI-SDR in the time domain.
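For reference, the standard time-domain SI-SDR that the latent-space loss lower-bounds can be computed as follows (a minimal NumPy sketch; variable names are ours). The reference signal is rescaled by the least-squares scalar, so the metric ignores any overall gain applied to the estimate:

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio in dB."""
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    projection = alpha * target            # optimally scaled reference
    distortion = estimate - projection     # what remains is error
    return 10.0 * np.log10(
        np.dot(projection, projection) / (np.dot(distortion, distortion) + eps)
    )

t = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(si_sdr(2.0 * t, t))      # gain-only change: very high SI-SDR
print(si_sdr(t + 0.1 * np.random.default_rng(1).standard_normal(16000), t))
```

A plain SDR would penalize the doubled-amplitude estimate; SI-SDR does not, which is why it is the preferred training objective here.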

Paper Details

Authors:
Efthymios Tzinis, Shrikant Venkataramani, Zhepei Wang, Cem Subakan, Paris Smaragdis
Submitted On:
20 April 2020 - 7:15pm

Document Files

etzinis_icassp2020_twostep_slides.pdf


[1] Efthymios Tzinis, Shrikant Venkataramani, Zhepei Wang, Cem Subakan, Paris Smaragdis, "Two-Step Sound Source Separation: Training on Learned Latent Targets (Presentation)", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5112. Accessed: Aug. 06, 2020.

Improving Universal Sound Separation Using Sound Classification Presentation


Deep learning approaches have recently achieved impressive performance on both audio source separation and sound classification. Most audio source separation approaches focus only on separating sources belonging to a restricted domain of source classes, such as speech and music. However, recent work has demonstrated the possibility of "universal sound separation", which aims to separate acoustic sources from an open domain, regardless of their class.

Paper Details

Authors:
Efthymios Tzinis, Scott Wisdom, John R. Hershey, Aren Jansen, Daniel P. W. Ellis
Submitted On:
3 May 2020 - 10:09pm

Document Files

etzinis_improving_icassp2020_slides.pdf


[1] Efthymios Tzinis, Scott Wisdom, John R. Hershey, Aren Jansen, Daniel P. W. Ellis, "Improving Universal Sound Separation Using Sound Classification Presentation", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5111. Accessed: Aug. 06, 2020.

Environment-aware Reconfigurable Noise Suppression


The paper proposes an efficient, robust, and reconfigurable technique to suppress various types of noise at any sampling rate. Theoretical analyses and subjective and objective test results show that the proposed noise suppression (NS) solution significantly enhances the speech transmission index (STI), speech intelligibility (SI), signal-to-noise ratio (SNR), and the subjective listening experience. STI and SI are rated on five levels: bad, poor, fair, good, and excellent. The most common noisy conditions span SNRs from -5 to 8 dB.

Paper Details

Authors:
Jun Yang, Joshua Bingham
Submitted On:
20 April 2020 - 4:55am

Document Files

Facebook Noise Suppression @ ICASSP 2020


[1] Jun Yang, Joshua Bingham, "Environment-aware Reconfigurable Noise Suppression", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5110. Accessed: Aug. 06, 2020.

Motion Dynamics Improve Speaker-Independent Lipreading


We present a novel lipreading system that improves on the task of speaker-independent word recognition by decoupling motion and content dynamics. We achieve this with a deep learning architecture that uses two distinct pipelines to process motion and content and subsequently merges them, yielding an end-to-end trainable system that fuses independently learned representations. We obtain an average relative word-accuracy improvement of ≈6.8% on unseen speakers and of ≈3.3% on known speakers, with respect to a baseline that uses a standard architecture.
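One simple way to realize such a decoupling (hypothetical preprocessing, not the paper's exact pipelines) is to feed raw frames to a content stream and temporal frame differences to a motion stream, then fuse the independently learned features by concatenation:

```python
import numpy as np

def motion_and_content_streams(frames):
    """Split a grayscale clip of shape (T, H, W) into two input streams.

    content: the raw frames; motion: first-order temporal differences,
    a crude stand-in for decoupled motion dynamics.
    """
    content = frames
    motion = np.diff(frames, axis=0, prepend=frames[:1])  # first delta is 0
    return motion, content

def fuse(feat_motion, feat_content):
    """Late fusion of independently learned representations."""
    return np.concatenate([feat_motion, feat_content], axis=-1)

clip = np.random.default_rng(0).random((25, 48, 48))  # 25-frame mouth crop
motion, content = motion_and_content_streams(clip)
# Pretend each pipeline produced a 128-dim embedding:
fused = fuse(np.zeros(128), np.ones(128))
print(motion.shape, fused.shape)    # (25, 48, 48) (256,)
```

The fused vector then feeds a shared classifier, keeping the two representations trainable end to end.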

Paper Details

Authors:
Matteo Riva, Michael Wand, Jürgen Schmidhuber
Submitted On:
19 April 2020 - 6:19pm

Document Files

Presentation PDF slides


[1] Matteo Riva, Michael Wand, Jürgen Schmidhuber, "Motion Dynamics Improve Speaker-Independent Lipreading", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5108. Accessed: Aug. 06, 2020.

PEVD-based Speech Enhancement in Reverberant Environments


The enhancement of noisy speech is important for applications involving human-to-human interactions, such as telecommunications and hearing aids, as well as human-to-machine interactions, such as voice-controlled systems and robot audition. In this work, we focus on reverberant environments. It is shown that, by exploiting the lack of correlation between speech and the late reflections, further noise reduction can be achieved. This is verified using simulations involving actual acoustic impulse responses and noise from the ACE corpus.

Paper Details

Authors:
Vincent W. Neo, Christine Evers, Patrick A. Naylor
Submitted On:
18 April 2020 - 12:18pm

Document Files

[ICASSP2020]_PEVD_based_Speech_Enhancement_in_Reverberant_Environments_Handout.pdf


[1] Vincent W. Neo, Christine Evers, Patrick A. Naylor, "PEVD-based Speech Enhancement in Reverberant Environments", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5106. Accessed: Aug. 06, 2020.

END-TO-END ARTICULATORY MODELING FOR DYSARTHRIC ARTICULATORY ATTRIBUTE DETECTION


In this study, we focus on detecting articulatory attribute errors in dysarthric patients with cerebral palsy (CP) or amyotrophic lateral sclerosis (ALS). There are two major challenges for this task. First, the pronunciation of dysarthric patients is unclear and inaccurate, which leads to poor performance from traditional automatic speech recognition (ASR) and automatic speech attribute transcription (ASAT) systems. Second, data are limited because recording such speech is difficult.

Paper Details

Authors:
Yuqin Lin, Longbiao Wang, Jianwu Dang, Sheng Li, Chenchen Ding
Submitted On:
18 April 2020 - 9:23am

Document Files

ICASSP2020_poster_lin.pdf


[1] Yuqin Lin, Longbiao Wang, Jianwu Dang, Sheng Li, Chenchen Ding, "END-TO-END ARTICULATORY MODELING FOR DYSARTHRIC ARTICULATORY ATTRIBUTE DETECTION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5105. Accessed: Aug. 06, 2020.

VaPar Synth - A Variational Parametric Model for Audio Synthesis


With the advent of data-driven statistical modeling and abundant computing power, researchers are turning increasingly to deep learning for audio synthesis. These methods try to model audio signals directly in the time or frequency domain. In the interest of more flexible control over the generated sound, it can be more useful to work with a parametric representation of the signal that corresponds more directly to musical attributes such as pitch, dynamics, and timbre.

Paper Details

Authors:
Krishna Subramani, Preeti Rao, Alexandre D'Hooge
Submitted On:
18 April 2020 - 2:10am

Document Files

Presentation Slides


[1] Krishna Subramani, Preeti Rao, Alexandre D'Hooge, "VaPar Synth - A Variational Parametric Model for Audio Synthesis", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5104. Accessed: Aug. 06, 2020.
