Spatial and Multichannel Audio

Natural Sound Rendering for Headphones: Integration of signal processing techniques (slides)


With the strong growth of assistive and personal listening devices, natural sound rendering over headphones is becoming a necessity for prolonged listening in multimedia and virtual reality applications. The aim of natural sound rendering is to recreate sound scenes with spatial and timbral quality as close to natural listening as possible, so as to achieve a truly immersive listening experience. Rendering natural sound over headphones, however, poses many challenges. This tutorial article presents signal processing techniques that tackle these challenges and assist human listening.

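As background for readers new to the area, the sketch below shows the basic binaural rendering step that headphone systems build on: convolving a mono source with a pair of head-related impulse responses (HRIRs). It is only a generic illustration of this single building block, not the integrated techniques surveyed in the slides, and the HRIR arrays are assumed to come from whatever measured set the user has available.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono signal at the direction for which the two HRIRs were
    measured, by filtering it with the left- and right-ear impulse responses.
    All inputs are 1-D arrays at the same sample rate; the HRIRs themselves
    would come from a measured database (they are not generated here)."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right])  # (2, len(mono) + len(hrir) - 1) stereo output
```
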
Paper Details

Authors:
Kaushik Sunder, Ee-Leng Tan
Submitted On:
23 February 2016 - 1:43pm

Document Files

SPM15slides_Natural Sound Rendering for Headphones.pdf

Cite:
Kaushik Sunder, Ee-Leng Tan, "Natural Sound Rendering for Headphones: Integration of signal processing techniques (slides)", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/167. Accessed: May 24, 2018.

CASCADE: Channel-Aware Structured Cosparse Audio DEclipper

Paper Details

Authors:
Clément Gaultier, Nancy Bertin, Rémi Gribonval
Submitted On:
25 April 2018 - 4:14am

Document Files

CASCADE

Cite:
Clément Gaultier, Nancy Bertin, Rémi Gribonval, "CASCADE: Channel-Aware Structured Cosparse Audio DEclipper", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3175. Accessed: May 24, 2018.

Pyroomacoustics: A Python package for audio room simulation and array processing algorithms


We present pyroomacoustics, a software package aimed at the rapid development and testing of audio array processing algorithms.

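As a quick orientation, here is a minimal sketch of the typical pyroomacoustics workflow: build a shoebox room, add a source and a small microphone array, and run the image-source simulation. The room dimensions and positions are arbitrary examples, and a few argument names (for instance, how wall absorption is specified) differ between package versions.

```python
import numpy as np
import pyroomacoustics as pra

fs = 16000
room = pra.ShoeBox([6, 5, 3], fs=fs, max_order=10)  # 6 x 5 x 3 m room, image-source order 10

# One source playing a second of noise, and a two-microphone array.
room.add_source([2.0, 3.0, 1.5], signal=np.random.randn(fs))
mics = np.array([[3.0, 3.1],   # x coordinates of the two microphones
                 [2.0, 2.0],   # y coordinates
                 [1.2, 1.2]])  # z coordinates
room.add_microphone_array(pra.MicrophoneArray(mics, fs))

room.compute_rir()                # room impulse responses, available as room.rir[mic][source]
room.simulate()                   # convolve the source signal with the RIRs
signals = room.mic_array.signals  # simulated microphone signals, shape (n_mics, n_samples)
```

The simulated signals can then be fed to the package's beamforming and direction-of-arrival modules, which is the rapid development-and-testing loop the abstract describes.
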
Paper Details

Authors:
Robin Scheibler, Eric Bezzam, Ivan Dokmanic
Submitted On:
23 April 2018 - 4:18am

Document Files

poster.pdf

Cite:
Robin Scheibler, Eric Bezzam, Ivan Dokmanic, "Pyroomacoustics: A Python package for audio room simulation and array processing algorithms", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3146. Accessed: May 24, 2018.

MODE DOMAIN SPATIAL ACTIVE NOISE CONTROL USING SPARSE SIGNAL REPRESENTATION


Active noise control (ANC) over a sizeable space requires a large number of reference and error microphones to satisfy the spatial Nyquist sampling criterion, which limits the feasibility of practical realization of such systems. This paper proposes a mode-domain feedforward ANC method to attenuate the noise field over a large space while reducing the number of microphones required.

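To make the microphone-count problem concrete, a common rule of thumb (not necessarily the exact design rule used in this paper) truncates the spherical-harmonic expansion of a sound field over a region of radius R at order N ≈ ⌈ekR/2⌉, so roughly (N+1)^2 sensors are needed to sample it:

```python
import math

def truncation_order(freq_hz, radius_m, c=343.0):
    """Common rule-of-thumb truncation order for a sound field observed
    over a spherical region of the given radius, at the given frequency."""
    k = 2 * math.pi * freq_hz / c              # wavenumber
    return math.ceil(math.e * k * radius_m / 2)

def min_sensors(freq_hz, radius_m):
    """Approximate sensor count implied by the spatial sampling requirement:
    one per spherical-harmonic coefficient up to the truncation order."""
    N = truncation_order(freq_hz, radius_m)
    return (N + 1) ** 2

print(min_sensors(500, 1.0))  # -> 196 for noise up to 500 Hz over a 1 m radius region
```

Counts of this size are what motivate the sparse, reduced-microphone formulation proposed here.
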
Paper Details

Authors:
Yuki Mitsufuji, Thushara Abhayapala
Submitted On:
20 April 2018 - 5:10pm

Document Files

ICASSP2018_poster.pdf

Cite:
Yuki Mitsufuji, Thushara Abhayapala, "MODE DOMAIN SPATIAL ACTIVE NOISE CONTROL USING SPARSE SIGNAL REPRESENTATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3115. Accessed: May 24, 2018.

MULTICHANNEL SPEECH SEPARATION WITH RECURRENT NEURAL NETWORKS FROM HIGH-ORDER AMBISONICS RECORDINGS


We present a source separation system for high-order ambisonics (HOA) contents. We derive a multichannel spatial filter from a mask estimated by a long short-term memory (LSTM) recurrent neural network. We combine one channel of the mixture with the outputs of basic HOA beamformers as inputs to the LSTM, assuming that we know the directions of arrival of the directional sources. In our experiments, the speech of interest can be corrupted either by diffuse noise or by an equally loud competing speaker.

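As background on how a time-frequency mask can yield a spatial filter, the sketch below is a generic mask-driven multichannel Wiener filter (a Souden-style construction). It is a standard recipe rather than necessarily the exact filter used in this work; the `mask` argument stands in for the LSTM output, and `X` for the STFT of the HOA or microphone channels.

```python
import numpy as np

def mask_based_mwf(X, mask, ref=0, mu=1.0):
    """Generic mask-driven multichannel Wiener filter.
    X    : mixture STFT, complex array of shape (freq, time, channels)
    mask : speech presence mask in [0, 1], shape (freq, time)
    ref  : reference channel index;  mu : trade-off constant.
    Returns the enhanced single-channel STFT of shape (freq, time)."""
    F, T, C = X.shape
    out = np.zeros((F, T), dtype=complex)
    reg = 1e-6 * np.eye(C)
    for f in range(F):
        Xf = X[f]                    # (time, channels), one frame per row
        m = mask[f][:, None]         # (time, 1)
        # Mask-weighted spatial covariance estimates of target and interference
        Phi_s = (m * Xf).T @ Xf.conj() / max(m.sum(), 1e-8)
        Phi_n = ((1 - m) * Xf).T @ Xf.conj() / max((1 - m).sum(), 1e-8)
        A = np.linalg.solve(Phi_n + reg, Phi_s)   # Phi_n^{-1} Phi_s
        w = A[:, ref] / (mu + np.trace(A))        # filter steered to the reference channel
        out[f] = Xf @ w.conj()                    # y_t = w^H x_t for every frame
    return out
```
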
Paper Details

Authors:
Emmanuel Vincent, Alexandre Guérin
Submitted On:
19 April 2018 - 5:18pm

Document Files

perotin.pdf

Cite:
Emmanuel Vincent, Alexandre Guérin, "MULTICHANNEL SPEECH SEPARATION WITH RECURRENT NEURAL NETWORKS FROM HIGH-ORDER AMBISONICS RECORDINGS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3031. Accessed: May 24, 2018.

Spatial audio feature discovery with convolutional neural network


The advent of mixed reality consumer products brings about a pressing need to develop and improve spatial sound rendering techniques for a broad user base. Despite a large body of prior work, the precise nature and importance of various sound localization cues and how they should be personalized for an individual user to improve localization performance is still an open research problem. Here we propose training a convolutional neural network (CNN) to classify the elevation angle of spatially rendered sounds and employing Layerwise Relevance Propagation (LRP) on the trained CNN model.

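To make the classification setup concrete, here is a toy CNN of the kind described: it maps a two-channel (left/right ear) spectral representation to one of several elevation classes. The input shape, layer sizes, and number of classes are illustrative assumptions, and the LRP analysis applied afterwards is not shown.

```python
import torch
import torch.nn as nn

class ElevationCNN(nn.Module):
    """Toy CNN mapping a 2-channel (left/right ear) spectral 'image' of shape
    (freq_bins, time_frames) to one of n_classes elevation bins."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):            # x: (batch, 2, freq_bins, time_frames)
        return self.classifier(self.features(x).flatten(1))

logits = ElevationCNN()(torch.randn(4, 2, 64, 32))  # -> (4, 8) class scores
```
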
Paper Details

Authors:
Etienne Thuillier, Hannes Gamper, Ivan J. Tashev
Submitted On:
20 April 2018 - 11:08am

Document Files

Spatial_audio_feature_discovery_ICASSP_2018.pdf

Cite:
Etienne Thuillier, Hannes Gamper, Ivan J. Tashev, "Spatial audio feature discovery with convolutional neural network", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2975. Accessed: May 24, 2018.

Considerations regarding individualization of head-related transfer functions


This paper provides some considerations regarding using individualized head-related transfer functions for rendering binaural spatial audio over headphones. It briefly considers the degree of benefit that individualization may provide. It then examines the degree of variation existing within the ear morphology across listeners within the Sydney-York Morphological and Recording of Ears (SYMARE) database using kernel principal component analysis and the large deformation diffeomorphic metric mapping framework.

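For readers unfamiliar with the kernel PCA step, the sketch below shows how a matrix of per-subject ear-shape descriptors (for example, flattened deformation parameters from the LDDMM framework; random placeholder data here) would be projected onto a handful of nonlinear morphological components.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))   # 60 subjects x 500 shape-descriptor values (placeholder data)

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3)
Z = kpca.fit_transform(X)            # (60, 10) low-dimensional morphological coordinates
print(Z.shape)
```
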
Paper Details

Authors:
Reza Zolfaghari, Xian Long, Arun Sebastian, Shayikh Hossain, Alexis Glaunes, Anthony Tew, Muhammad Shahnawaz, Augusto Sarti
Submitted On:
18 April 2018 - 4:22pm

Document Files

CJinICASSP2018.pdf

Cite:
Reza Zolfaghari, Xian Long, Arun Sebastian, Shayikh Hossain, Alexis Glaunes, Anthony Tew, Muhammad Shahnawaz, Augusto Sarti, "Considerations regarding individualization of head-related transfer functions", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2972. Accessed: May 24, 2018.

AN IMMERSIVE 3D AUDIO HEADSET FOR VIRTUAL AND AUGMENTED REALITY

Paper Details

Authors:
Nguyen Duy Hai, Santi Peksi, Rishabh Ranjan, Jianjun He, Boon Siang Tan, Rishabh Gupta, Woon-Seng Gan
Submitted On:
18 April 2018 - 3:21am

Document Files

ICASSP 2018 POSTER_Final.pdf

Cite:
Nguyen Duy Hai, Santi Peksi, Rishabh Ranjan, Jianjun He, Boon Siang Tan, Rishabh Gupta, Woon-Seng Gan, "AN IMMERSIVE 3D AUDIO HEADSET FOR VIRTUAL AND AUGMENTED REALITY", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2961. Accessed: May 24, 2018.

ICASSP 2018 Tutorial T11 Natural and Augmented Listening for VR/AR/MR


This tutorial aims to equip participants with basic and advanced signal processing techniques that can be used in VR/AR applications to create a natural and augmented listening experience using headsets.
The tutorial is divided into five sections and covers the following topics:
Introduction to spatial audio, fundamentals of natural listening, and emerging audio applications

Paper Details

Authors:
Rishabh Ranjan, Rishabh Gupta
Submitted On:
18 April 2018 - 12:04am

Document Files

ICASSP2018_Tutorial_T11_Natual_and_Augmented_Listening_for_VR_AR_MR.pdf

Cite:
Rishabh Ranjan, Rishabh Gupta, "ICASSP 2018 Tutorial T11 Natural and Augmented Listening for VR/AR/MR", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2958. Accessed: May 24, 2018.

A UNIFIED APPROACH TO GENERATING SOUND ZONES USING VARIABLE SPAN LINEAR FILTERS


Sound zones are typically created using Acoustic Contrast Control (ACC), Pressure Matching (PM), or variations of the two. ACC maximizes the acoustic potential energy contrast between a listening zone and a quiet zone. Although the contrast is maximized, the phase is not controlled. To control both the amplitude and the phase, PM instead minimizes the difference between the reproduced sound field and the desired sound field in all zones.

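For reference, the sketch below gives the textbook formulations of the two baselines named above: ACC as a generalized eigenvalue problem on the bright-zone and dark-zone spatial correlation matrices, and PM as regularized least squares against a desired pressure vector. It illustrates the two extremes only, not the paper's variable span linear filtering that unifies them.

```python
import numpy as np
from scipy.linalg import eigh

def acc_weights(R_bright, R_dark, reg=1e-6):
    """Acoustic Contrast Control: maximize w^H R_bright w / w^H R_dark w.
    The maximizer is the dominant generalized eigenvector of (R_bright, R_dark)."""
    L = R_dark.shape[0]
    _, vecs = eigh(R_bright, R_dark + reg * np.eye(L))  # eigenvalues in ascending order
    return vecs[:, -1]

def pm_weights(G, p_desired, reg=1e-6):
    """Pressure Matching: regularized least-squares fit of the reproduced
    pressures G @ w to the desired pressures at all control points."""
    L = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + reg * np.eye(L), G.conj().T @ p_desired)
```
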
Paper Details

Authors:
Taewoong Lee, Jesper Kjær Nielsen, Jesper Rindom Jensen, and Mads Græsbøll Christensen
Submitted On:
18 April 2018 - 2:57pm

Document Files

[Poster] VAST_ICASSP2018_final+.pdf

Cite:
Taewoong Lee, Jesper Kjær Nielsen, Jesper Rindom Jensen, and Mads Græsbøll Christensen, "A UNIFIED APPROACH TO GENERATING SOUND ZONES USING VARIABLE SPAN LINEAR FILTERS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2925. Accessed: May 24, 2018.
