Unsupervised Acoustic Scene Mapping Based on Acoustic Features and Dimensionality Reduction

DOI:
10.60864/t2k6-5s08
Citation Author(s):
Idan Cohen, Sharon Gannot, Ofir Lindenbaum
Submitted by:
Idan Cohen
Last updated:
11 April 2024 - 9:06am
Document Type:
Presentation Slides
Document Year:
2024
Presenters:
Idan Cohen
Paper Code:
AASP-L1.4

Classical methods for acoustic scene mapping require estimating the time difference of arrival (TDOA) between microphones. Unfortunately, TDOA estimation is very sensitive to reverberation and additive noise. We introduce an unsupervised data-driven approach that exploits the natural structure of the data. Toward this goal, we adapt the recently proposed local conformal autoencoder (LOCA) – an offline deep learning scheme for extracting standardized data coordinates from measurements. Our experimental setup includes a microphone array that records a sound source, transmitted from an unknown position, at multiple locations across the acoustic enclosure. We demonstrate that our proposed scheme learns an isometric representation of the microphones’ spatial locations and can extrapolate to new, unvisited regions. We evaluate the performance of our method in a series of realistic simulations and compare it with a classical approach and with other dimensionality reduction schemes. We further assess the influence of reverberation on our framework and show that it is considerably robust.
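As context for the classical baseline the abstract contrasts with, the sketch below shows a standard way to estimate the TDOA between two microphone signals: the generalized cross-correlation with phase transform (GCC-PHAT). This is a generic illustration of TDOA estimation, not code from the paper; the function name and parameters are our own.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs, max_tau=None):
    """Estimate the TDOA (seconds) of `sig` relative to `ref` via GCC-PHAT."""
    # Zero-pad to the combined length so the circular correlation
    # computed via the FFT behaves like a linear correlation.
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    # PHAT weighting: discard magnitude, keep only phase information.
    # This is what gives GCC-PHAT some robustness to reverberation.
    R /= np.abs(R) + 1e-12
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Rearrange so negative lags precede positive lags, then pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Usage: a white-noise reference delayed by 20 samples at 16 kHz
rng = np.random.default_rng(0)
fs = 16000
ref = rng.standard_normal(4096)
delay = 20
sig = np.concatenate((np.zeros(delay), ref[:-delay]))
tau = gcc_phat_tdoa(sig, ref, fs)  # recovers delay / fs = 1.25 ms
```

In clean conditions the peak of the phase-transformed cross-correlation sits at the true sample delay; in reverberant rooms, spurious peaks from reflections are precisely what makes this estimate unreliable, motivating the data-driven alternative described above.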
