Framework For Evaluation Of Sound Event Detection In Web Videos

Citation Author(s):
Rohan Badlani, Ankit Shah, Benjamin Elizalde, Anurag Kumar, Bhiksha Raj
Submitted by:
Rohan Badlani
Last updated:
20 April 2018 - 9:45am
Document Type:
Poster
Document Year:
2018
Presenters:
Rohan Badlani & Ankit Shah
Paper Code:
1260
 

Web videos are the largest source of sound events. Most videos lack segment-level sound event labels; however, a significant number can be retrieved by text queries, which search engines match against video metadata. In this paper we explore the extent to which a search query can serve as the true label for detecting sound events in videos. We present a framework for large-scale sound event recognition on web videos. The framework crawls videos using search queries corresponding to 78 sound event labels drawn from three datasets. The datasets are used to train three classifiers, whose predictions we obtain on 3.7 million web video segments. We evaluate performance using the search query as the true label and compare it with human labeling. The two types of ground truth yield performance within 10% of each other and show similar trends as the number of evaluated segments increases. Our experiments therefore show the potential of using the search query as a preliminary true label for sound event recognition in web videos.
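The core idea of the evaluation can be sketched as follows: each crawled segment carries the search query that retrieved it, and classifier accuracy measured against that query-derived label is compared with accuracy measured against a human-assigned label. The sketch below is illustrative only; the event names, toy data, and helper function are assumptions, not taken from the paper.

```python
# Hypothetical sketch of query-as-label evaluation: compare classifier
# accuracy against the search-query label vs. against a human label.
# All names and data below are illustrative, not from the paper.

def accuracy(predictions, labels):
    """Fraction of segments whose predicted event matches the given label."""
    matches = sum(p == t for p, t in zip(predictions, labels))
    return matches / len(labels)

# Toy segments: (classifier prediction, search-query label, human label)
segments = [
    ("dog_bark", "dog_bark", "dog_bark"),
    ("siren",    "siren",    "siren"),
    ("dog_bark", "dog_bark", "speech"),   # query label disagrees with human
    ("music",    "speech",   "speech"),
]

preds        = [s[0] for s in segments]
query_labels = [s[1] for s in segments]
human_labels = [s[2] for s in segments]

acc_query = accuracy(preds, query_labels)   # 0.75 on this toy data
acc_human = accuracy(preds, human_labels)   # 0.5 on this toy data
gap = abs(acc_query - acc_human)
```

In the paper's actual experiments, this gap between the two ground truths stayed within 10%, motivating the use of the query as a preliminary label.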
