WHAT MAKES THE SOUND?: A DUAL-MODALITY INTERACTING NETWORK FOR AUDIO-VISUAL EVENT LOCALIZATION
- Submitted by: Janani Ramaswamy
- Last updated: 16 May 2020 - 1:09am
- Document Type: Presentation Slides
- Document Year: 2020
- Presenters: Janani Ramaswamy
- Paper Code: 5164
The presence of auditory and visual senses enables humans to obtain a profound understanding of real-world scenes. While audio and visual signals can each provide scene knowledge individually, combining the two offers better insight into the underlying event. In this paper, we address the problem of audio-visual event localization, where the goal is to identify the presence of an event that is both audible and visible in a video, using fully or weakly supervised learning. To this end, we propose a novel Audio-Visual Interacting Network (AVIN) that enables inter- as well as intra-modality interactions by exploiting the local and global information of the two modalities. Our empirical evaluations confirm the superiority of the proposed model over existing state-of-the-art methods in both the fully and weakly supervised settings, thus asserting the efficacy of our joint modeling.
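The abstract gives no implementation details, but the core idea of pairing intra-modality (self) and inter-modality (cross) interactions over per-segment audio and visual features can be sketched as below. This is a generic PyTorch illustration, not the AVIN architecture itself: the attention-based formulation, feature dimension, head count, and the 28-events-plus-background output size are all assumptions (the last matching the common AVE benchmark setup for fully supervised segment-level prediction).

```python
import torch
import torch.nn as nn


class DualModalityBlock(nn.Module):
    """Illustrative inter-/intra-modality interaction block.

    NOTE: a hypothetical sketch of the general idea, not the AVIN
    model from the paper. Each modality attends to itself (intra)
    and to the other modality (inter) across the T video segments.
    """

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.intra_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_av = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_va = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, audio, visual):
        # audio, visual: (batch, T, dim) segment-level features
        a_intra, _ = self.intra_a(audio, audio, audio)     # audio attends to audio
        v_intra, _ = self.intra_v(visual, visual, visual)  # visual attends to visual
        a_inter, _ = self.inter_av(audio, visual, visual)  # audio queries visual
        v_inter, _ = self.inter_va(visual, audio, audio)   # visual queries audio
        # residual fusion of local (intra) and cross-modal (inter) context
        a_out = self.norm_a(audio + a_intra + a_inter)
        v_out = self.norm_v(visual + v_intra + v_inter)
        return a_out, v_out


# Usage sketch: per-segment event prediction (fully supervised setting).
# Shapes and the 29-way output (28 events + background) are assumptions.
audio = torch.randn(2, 10, 256)    # 2 videos, 10 one-second segments
visual = torch.randn(2, 10, 256)
block = DualModalityBlock()
a, v = block(audio, visual)
classifier = nn.Linear(512, 29)
logits = classifier(torch.cat([a, v], dim=-1))  # (2, 10, 29) segment scores
```

In a weakly supervised variant, the per-segment scores would typically be pooled into a single video-level prediction, since only video-level labels are available during training.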