
A Sequence Matching Network for Polyphonic Sound Event Localization and Detection

Citation Author(s):
T. N. T. Nguyen, D. L. Jones, W. S. Gan
Submitted by:
Tho Nguyen
Last updated:
23 April 2020 - 4:54am
Document Type:
Presentation Slides
Document Year:
2020
Presenters:
Tho Nguyen
Paper Code:
AUD-L3

Polyphonic sound event detection and direction-of-arrival estimation require different input features from audio signals. While sound event detection mainly relies on time-frequency patterns, direction-of-arrival estimation relies on magnitude or phase differences between microphones. Previous approaches use the same input features for both tasks and train them jointly or in a two-stage transfer-learning manner. We propose a two-step approach that decouples the learning of the sound event detection and direction-of-arrival estimation systems. In the first step, we detect the sound events and estimate the directions of arrival separately to optimize the performance of each system. In the second step, we train a deep neural network to match the two output sequences of the event detector and the direction-of-arrival estimator. This modular and hierarchical approach allows flexibility in the system design and increases the performance of the whole sound event localization and detection system. Experimental results on the DCASE 2019 sound event localization and detection dataset show improved performance compared to previous state-of-the-art solutions.
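To illustrate the second step, the sketch below shows one plausible sequence matching network in PyTorch: a bidirectional GRU that consumes the concatenated frame-wise SED posteriors and DOA estimates and predicts a direction for each sound class per frame. All shapes and hyperparameters here are illustrative assumptions (11 classes and azimuth/elevation pairs to mirror the DCASE 2019 setup, hidden size 128), not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class SequenceMatchingNetwork(nn.Module):
    """Minimal sketch: match the frame-wise output sequences of a
    pretrained event detector and a pretrained DOA estimator.
    Hyperparameters are illustrative, not the paper's configuration."""

    def __init__(self, n_classes=11, n_doa=2, hidden=128):
        super().__init__()
        self.n_classes = n_classes
        self.n_doa = n_doa
        # Input per frame: class activity posteriors + DOA estimates
        self.rnn = nn.GRU(
            input_size=n_classes + n_doa,
            hidden_size=hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Per-frame, per-class DOA assignment (azimuth/elevation)
        self.fc = nn.Linear(2 * hidden, n_classes * n_doa)

    def forward(self, sed_seq, doa_seq):
        # sed_seq: (batch, frames, n_classes) event activity posteriors
        # doa_seq: (batch, frames, n_doa) frame-wise direction estimates
        x = torch.cat([sed_seq, doa_seq], dim=-1)
        x, _ = self.rnn(x)
        out = self.fc(x)
        # (batch, frames, n_classes, n_doa): a direction per class
        return out.view(x.size(0), x.size(1), self.n_classes, self.n_doa)


# Usage sketch with dummy output sequences from the two subsystems
model = SequenceMatchingNetwork()
sed = torch.rand(4, 60, 11)  # dummy SED posteriors, 60 frames
doa = torch.rand(4, 60, 2)   # dummy azimuth/elevation estimates
matched = model(sed, doa)    # (4, 60, 11, 2)
```

Because the matching network only sees the two output sequences, either subsystem can be swapped out or retrained independently, which is the flexibility the modular design aims for.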
