
Spatial and Multichannel Audio

Natural Sound Rendering for Headphones: Integration of signal processing techniques (slides)


With the strong growth of assistive and personal listening devices, natural sound rendering over headphones is becoming a necessity for prolonged listening in multimedia and virtual reality applications. The aim of natural sound rendering is to recreate sound scenes with spatial and timbral quality as close to natural as possible, so as to achieve a truly immersive listening experience. However, rendering natural sound over headphones encounters many challenges. This tutorial article presents signal processing techniques that tackle these challenges to assist human listening.
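The most basic building block behind such rendering is binaural synthesis: convolving a mono source with a pair of head-related impulse responses (HRIRs). A minimal NumPy sketch, using toy impulse responses in place of measured HRIRs (the delay and gain values below are illustrative assumptions, not data from the article):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse
    responses (HRIRs) to produce a binaural stereo signal."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

fs = 48000
# Toy HRIRs: the right ear receives the source later and attenuated,
# mimicking a source located to the listener's left.
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[12] = 0.6      # 12 samples ~ 0.25 ms ITD

mono = np.random.randn(fs // 10)             # 100 ms of noise
binaural = render_binaural(mono, hrir_l, hrir_r)
```

A real renderer would use measured or individualized HRIRs plus headphone equalization; the toy responses here only mimic an interaural time and level difference.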

Paper Details

Authors:
Kaushik Sunder, Ee-Leng Tan
Submitted On:
23 February 2016 - 1:43pm

Document Files

SPM15slides_Natural Sound Rendering for Headphones.pdf


[1] Kaushik Sunder, Ee-Leng Tan, "Natural Sound Rendering for Headphones: Integration of signal processing techniques (slides)", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/167. Accessed: Aug. 14, 2020.

Translation of a Higher Order Ambisonics Sound Scene Based on Parametric Decomposition


This paper presents a novel 3DoF+ system that allows the listener to navigate, i.e., change position, in scene-based spatial audio content beyond the sweet spot of a Higher Order Ambisonics recording. It is one of the first such systems based on sound capture at a single spatial position. The system uses a parametric decomposition of the recorded sound field. For the synthesis, only coarse distance information about the sources is required as side information, not their exact number.
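The geometric core of such translation is simple: once a source's direction and coarse distance are known from the parametric decomposition, its apparent direction and distance relative to a translated listener follow from vector geometry. A hedged 2D sketch (the function name and the 2D simplification are mine, not the paper's):

```python
import numpy as np

def translate_source(azimuth, distance, listener_shift):
    """Direction (rad) and distance (m) of a source relative to a
    listener translated by `listener_shift` from the recording
    position. 2D for brevity; the same idea extends to 3D."""
    src = distance * np.array([np.cos(azimuth), np.sin(azimuth)])
    rel = src - np.asarray(listener_shift, dtype=float)
    return np.arctan2(rel[1], rel[0]), np.linalg.norm(rel)

# Source 2 m straight ahead; listener steps 1 m to the left (+y):
# the source now appears to the front-right and slightly farther away.
az, dist = translate_source(0.0, 2.0, [0.0, 1.0])
```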

Paper Details

Authors:
Andreas Behler, Peter Jax
Submitted On:
20 May 2020 - 10:32am

Document Files

handout.pdf


[1] Andreas Behler, Peter Jax, "Translation of a Higher Order Ambisonics Sound Scene Based on Parametric Decomposition", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5414. Accessed: Aug. 14, 2020.

Mutual-Information-Based Sensor Placement for Spatial Sound Field Recording


A sensor (microphone) placement method based on mutual information for spatial sound field recording is proposed. The sound field recording methods using distributed sensors enable the estimation of the sound field inside a target region of arbitrary shape; however, it is a difficult task to find the best placement of sensors. We focus on the mutual-information-based sensor placement method in which spatial phenomena are modeled as a Gaussian process (GP).
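Under a GP model, the mutual-information criterion admits a simple greedy algorithm (in the spirit of Krause, Singh, and Guestrin's formulation): repeatedly add the candidate whose conditional variance given the selected set, divided by its conditional variance given the remaining candidates, is largest. A toy NumPy sketch with an assumed RBF kernel (kernel choice and length scale are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, ell=0.15):
    """Isotropic squared-exponential covariance over positions X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def cond_var(K, i, idx):
    """GP prior variance of candidate i conditioned on the set idx."""
    if not idx:
        return K[i, i]
    KAA = K[np.ix_(idx, idx)]
    kiA = K[i, idx]
    return K[i, i] - kiA @ np.linalg.solve(KAA, kiA)

def greedy_mi_placement(X, n_sensors):
    """Greedy mutual-information selection: at each step maximize
    var(i | selected) / var(i | unselected without i)."""
    K = rbf_kernel(X) + 1e-6 * np.eye(len(X))   # jitter for stability
    selected = []
    for _ in range(n_sensors):
        best, best_gain = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            rest = [j for j in range(len(X))
                    if j not in selected and j != i]
            gain = cond_var(K, i, selected) / cond_var(K, i, rest)
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

# 1D toy: candidate positions on a line; pick 3 sensor locations.
X = np.linspace(0, 1, 10)[:, None]
chosen = greedy_mi_placement(X, 3)
```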

Paper Details

Authors:
Kentaro Ariga, Tomoya Nishida, Shoichi Koyama, Natsuki Ueno, Hiroshi Saruwatari
Submitted On:
14 May 2020 - 6:22am

Document Files

ICASSP2020_Nishida.pdf


[1] Kentaro Ariga, Tomoya Nishida, Shoichi Koyama, Natsuki Ueno, Hiroshi Saruwatari, "Mutual-Information-Based Sensor Placement for Spatial Sound Field Recording", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5278. Accessed: Aug. 14, 2020.

Active noise control over multiple regions: performance analysis


Active noise control (ANC) over space is a well-researched topic where multi-microphone, multi-loudspeaker systems are designed to minimize the noise over a spatial region of interest. In this paper, we perform an initial study on the more complex problem of simultaneous noise control over multiple target regions using a single ANC system. In particular, we investigate the maximum active noise control performance over the multiple target regions, given a particular setup of secondary loudspeakers.
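At a single frequency, the optimal-performance question reduces to a least-squares problem: choose secondary source strengths q minimizing the residual pressure |p + Gq|^2 at control points sampling the target regions. A free-field NumPy sketch with an assumed geometry (positions, frequency, and number of control points are illustrative choices, not the paper's setup):

```python
import numpy as np

def greens_free_field(src, mic, k):
    """Free-field Green's function between two 3D points, wavenumber k."""
    r = np.linalg.norm(np.asarray(mic) - np.asarray(src))
    return np.exp(-1j * k * r) / (4 * np.pi * r)

k = 2 * np.pi * 200 / 343.0                      # wavenumber at 200 Hz
primary = np.array([3.0, 0.0, 0.0])              # primary noise source
secondaries = np.array([[1.0, 1.0, 0.0], [1.0, -1.0, 0.0],
                        [-1.0, 1.0, 0.0], [-1.0, -1.0, 0.0]])
# Control microphones sampling two separate target regions.
region1 = [np.array([0.3 + 0.1 * i, 0.8, 0.0]) for i in range(4)]
region2 = [np.array([0.3 + 0.1 * i, -0.8, 0.0]) for i in range(4)]
mics = region1 + region2

p = np.array([greens_free_field(primary, m, k) for m in mics])
G = np.array([[greens_free_field(s, m, k) for s in secondaries]
              for m in mics])

# Least-squares secondary source strengths minimizing residual pressure.
q, *_ = np.linalg.lstsq(G, -p, rcond=None)
residual = p + G @ q
reduction_db = 10 * np.log10(np.sum(np.abs(residual) ** 2)
                             / np.sum(np.abs(p) ** 2))
```

The achievable reduction depends on the secondary loudspeaker setup relative to the target regions, which is exactly the trade-off the paper analyzes.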

Paper Details

Authors:
Jihui Zhang, Huiyuan Sun, Prasanga N. Samarasinghe, Thushara D. Abhayapala
Submitted On:
13 May 2020 - 8:25pm

Document Files

ICASSP2020_presentation_final.pdf


[1] Jihui Zhang, Huiyuan Sun, Prasanga N. Samarasinghe, Thushara D. Abhayapala, "Active noise control over multiple regions: performance analysis", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5170. Accessed: Aug. 14, 2020.

RAW WAVEFORM BASED END-TO-END DEEP CONVOLUTIONAL NETWORK FOR SPATIAL LOCALIZATION OF MULTIPLE ACOUSTIC SOURCES


In this paper, we present an end-to-end deep convolutional neural network operating on multi-channel raw audio data to localize multiple simultaneously active acoustic sources in space. Previously reported end-to-end deep learning based approaches work well in localizing a single source directly from multi-channel raw audio, but are not easily extendable to localizing multiple sources due to the well-known permutation problem.

Paper Details

Authors:
Harshavardhan Sundar, Weiran Wang, Ming Sun, Chao Wang
Submitted On:
3 May 2020 - 3:51pm

Document Files

Raw Waveform based MSL


[1] Harshavardhan Sundar, Weiran Wang, Ming Sun, Chao Wang, "RAW WAVEFORM BASED END-TO-END DEEP CONVOLUTIONAL NETWORK FOR SPATIAL LOCALIZATION OF MULTIPLE ACOUSTIC SOURCES", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5118. Accessed: Aug. 14, 2020.

Analytical Method of 2.5D Exterior Sound Field Synthesis by Using Multipole Loudspeaker Array


We propose an analytical method of 2.5-dimensional exterior sound field reproduction by using a multipole loudspeaker array. The method reproduces the sound field modeled by expansion coefficients of spherical harmonics based on multipole superposition. We also present an analytical method for converting the expansion coefficients of spherical harmonics to weighting coefficients for multipole superposition.

Paper Details

Authors:
Kenta Imaizumi, Kimitaka Tsuitsuimi, Atsushi Nakadaira, Yoichi Haneda
Submitted On:
18 October 2019 - 1:47am

Document Files

WASPAA2019_Poster_Imaizumi.pdf


[1] Kenta Imaizumi, Kimitaka Tsuitsuimi, Atsushi Nakadaira, Yoichi Haneda, "Analytical Method of 2.5D Exterior Sound Field Synthesis by Using Multipole Loudspeaker Array", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4880. Accessed: Aug. 14, 2020.

3D localized sound zone generation with a planar omni-directional loudspeaker array


This paper provides a 3D localized sound zone generation method using a planar omni-directional loudspeaker array. In the proposed method, multiple co-centered circular arrays are arranged on the horizontal plane and an additional loudspeaker is located at the array's center. The sound field produced by this center loudspeaker is then cancelled using the multiple circular arrays. A localized 3D sound zone can thus be generated inside a sphere whose maximum radius equals that of the circular arrays, because the residual sound field is contained within the sphere.

Paper Details

Submitted On:
17 October 2019 - 1:25am

Document Files

WASPAA_2019_okamoto.pdf


[1] "3D localized sound zone generation with a planar omni-directional loudspeaker array", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4877. Accessed: Aug. 14, 2020.

MULTI-GEOMETRY SPATIAL ACOUSTIC MODELING FOR DISTANT SPEECH RECOGNITION


The use of spatial information with multiple microphones can improve far-field automatic speech recognition (ASR) accuracy. However, conventional microphone array techniques degrade speech enhancement performance when there is an array geometry mismatch between design and test conditions. Moreover, such speech enhancement techniques do not always yield ASR accuracy improvement due to the difference between speech enhancement and ASR optimization objectives.

Paper Details

Authors:
Shiva Sundaram, Nikko Strom, Bjorn Hoffmeister
Submitted On:
10 May 2019 - 6:38pm

Document Files

poster file

manuscript file

[1] Shiva Sundaram, Nikko Strom, Bjorn Hoffmeister, "MULTI-GEOMETRY SPATIAL ACOUSTIC MODELING FOR DISTANT SPEECH RECOGNITION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4420. Accessed: Aug. 14, 2020.

FREQUENCY DOMAIN MULTI-CHANNEL ACOUSTIC MODELING FOR DISTANT SPEECH RECOGNITION


Conventional far-field automatic speech recognition (ASR) systems typically employ microphone array techniques for speech enhancement in order to improve robustness against noise or reverberation. However, such speech enhancement techniques do not always yield ASR accuracy improvement because the optimization criterion for speech enhancement is not directly relevant to the ASR objective. In this work, we develop new acoustic modeling techniques that optimize spatial filtering and long short-term memory (LSTM) layers from multi-channel (MC) input based on an ASR criterion directly.
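The spatial-filtering layer in such a model applies, per STFT bin, a complex linear combination of the input channels; in the fixed (non-learned) case this is exactly a frequency-domain beamformer. A NumPy sketch of that forward operation, initialized with delay-and-sum weights for a hypothetical 4-mic linear array (spacing, look direction, and sizes are illustrative assumptions):

```python
import numpy as np

def steering_vector(freqs, mic_pos, doa, c=343.0):
    """Far-field steering vector per frequency bin: phase of a plane
    wave from direction `doa` at each microphone position."""
    delays = mic_pos @ doa / c                       # per-mic delay (s)
    return np.exp(-2j * np.pi * freqs[:, None] * delays[None, :])

def apply_spatial_filter(stft_mc, W):
    """Per-bin complex combination of channels, y[f,t] = W[f]^H x[f,t]:
    the operation a frequency-domain spatial-filtering layer performs."""
    return np.einsum('ftc,fc->ft', stft_mc, W.conj())

fs, nfft = 16000, 512
freqs = np.fft.rfftfreq(nfft, 1 / fs)                # 257 bins
mics = np.array([[0.00, 0.0, 0.0], [0.04, 0.0, 0.0],
                 [0.08, 0.0, 0.0], [0.12, 0.0, 0.0]])
doa = np.array([1.0, 0.0, 0.0])                      # look direction

# Fixed delay-and-sum initialization; a trained model would instead
# learn these per-bin weights jointly with the LSTM from an ASR loss.
W = steering_vector(freqs, mics, doa) / len(mics)

X = (np.random.randn(len(freqs), 20, 4)
     + 1j * np.random.randn(len(freqs), 20, 4))      # multi-channel STFT
Y = apply_spatial_filter(X, W)                       # (257, 20)
```

A plane wave arriving from the look direction passes with unit gain, which is the usual sanity check for such a filter.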

Paper Details

Authors:
Shiva Sundaram, Nikko Strom, Bjorn Hoffmeister
Submitted On:
10 May 2019 - 6:36pm

Document Files

poster file

manuscript file

[1] Shiva Sundaram, Nikko Strom, Bjorn Hoffmeister, "FREQUENCY DOMAIN MULTI-CHANNEL ACOUSTIC MODELING FOR DISTANT SPEECH RECOGNITION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4419. Accessed: Aug. 14, 2020.

Horizontal 3D sound field recording and 2.5D synthesis with omni-directional circular arrays


Although 2.5D sound field synthesis with a circular loudspeaker array can be used in a 3D sound field, sound field recording with a circular microphone array conventionally assumes a 2D rather than a 3D sound field. This paper presents a horizontal 3D sound field recording and 2.5D synthesis method for 3D sound fields using multiple co-centered omni-directional circular microphone arrays and a circular loudspeaker array, without vertical derivative measurements.
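On the recording side, a circular microphone array's pressure samples are converted to circular harmonic coefficients by a spatial FFT over azimuth. A single-frequency toy sketch for a plane-wave field (the array radius, frequency, and microphone count below are arbitrary choices, not the paper's):

```python
import numpy as np

M = 32                                    # microphones on the circle
phi = 2 * np.pi * np.arange(M) / M        # microphone azimuths
k = 2 * np.pi * 1000 / 343.0              # wavenumber at 1 kHz
R = 0.05                                  # 5 cm array radius

# Plane wave arriving from azimuth 0, sampled on the circle.
p = np.exp(1j * k * R * np.cos(phi))

# Spatial FFT over azimuth yields circular harmonic coefficients; by
# the Jacobi-Anger expansion the order-n coefficient is i^n * J_n(kR).
coeffs = np.fft.fft(p) / M

# The coefficients reconstruct the sampled pressure exactly.
p_rec = np.fft.ifft(coeffs * M)
```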

Paper Details

Authors:
Takuma Okamoto
Submitted On:
10 May 2019 - 3:08am

Document Files

icassp_2019_okamoto_2.pdf


[1] Takuma Okamoto, "Horizontal 3D sound field recording and 2.5D synthesis with omni-directional circular arrays", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4279. Accessed: Aug. 14, 2020.
