ATTENTIVE MAX FEATURE MAP AND JOINT TRAINING FOR ACOUSTIC SCENE CLASSIFICATION

Citation Author(s):
Hye-jin Shim, Jee-weon Jung, Ju-ho Kim, Ha-jin Yu
Submitted by:
Hye-jin Shim
Last updated:
13 May 2022 - 2:29am
Document Type:
Poster
Document Year:
2022
Presenters:
Hye-jin Shim
Paper Code:
AUD-36.1

Various attention mechanisms are widely applied to acoustic scene classification. However, we empirically found that an attention mechanism can excessively discard potentially valuable information, despite improving performance. We propose the attentive max feature map, which combines two effective techniques, attention and the max feature map, to refine the attention mechanism and mitigate the above-mentioned phenomenon. We also explore various joint training methods, including multi-task learning, that assign additional abstract labels to each audio recording. By applying the two proposed techniques with relatively few parameters, our single system achieves performance competitive with much larger state-of-the-art single systems on Subtask A of the DCASE 2020 challenge. Furthermore, adopting the proposed attentive max feature map, our team placed fourth in the recent DCASE 2021 challenge.
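To make the building blocks concrete, below is a minimal NumPy sketch of the standard max feature map (channel-halving elementwise max) and one hypothetical way to combine it with channel attention. The `attentive_max_feature_map` function is an illustrative assumption, not the paper's exact formulation: the actual combination of attention and MFM used in the proposed system may differ.

```python
import numpy as np

def max_feature_map(x):
    """Standard max feature map (MFM): split the channel axis in half
    and take the elementwise maximum of the two halves.
    x: array of shape (channels, time, freq); channels must be even."""
    c = x.shape[0]
    assert c % 2 == 0, "MFM requires an even number of channels"
    a, b = x[: c // 2], x[c // 2:]
    return np.maximum(a, b)

def attentive_max_feature_map(x):
    """Hypothetical sketch: compute channel attention weights from
    globally average-pooled channel statistics (softmax), rescale the
    features with those weights, then apply the MFM. This is only an
    illustration of combining attention with MFM, not the paper's
    exact mechanism."""
    stats = x.mean(axis=(1, 2))              # per-channel global average pool
    w = np.exp(stats - stats.max())
    w = w / w.sum()                          # softmax attention weights
    x_att = x * w[:, None, None] * x.shape[0]  # rescale to preserve magnitude
    return max_feature_map(x_att)

x = np.random.randn(8, 4, 4)
print(max_feature_map(x).shape)  # (4, 4, 4)
```

Because the max is taken after attention scaling, the attention here reweights rather than zeroes out channels, which loosely mirrors the stated goal of not excessively discarding information.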
