Detecting semantic concepts in consumer videos using audio

Bibliographic Details
Published in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2279-2283
Main Authors: Junwei Liang, Qin Jin, Xixi He, Gang Yang, Jieping Xu, Xirong Li
Format: Conference Proceeding
Language: English
Published: IEEE, 01-04-2015
Description
Summary: With the increasing use of audio sensors in user-generated content collection, detecting semantic concepts from audio streams has become an important research problem. In this paper, we present a semantic concept annotation system that uses the soundtrack/audio of a video. We investigate three different acoustic feature representations for audio-based semantic concept annotation and explore fusion of the audio annotation system with visual annotation systems. We test our system on the data collection from the HUAWEI Accurate and Fast Mobile Video Annotation Grand Challenge 2014. The experimental results show that our audio-only concept annotation system detects semantic concepts significantly better than random guessing. It also provides significant complementary information to the visual-based concept annotation system, boosting its performance. Further detailed analysis shows that for a semantic concept that can be interpreted both visually and acoustically, it is better to train the visual and audio concept models separately, using visually driven and audio-driven ground truth respectively.
ISSN: 1520-6149, 2379-190X
DOI: 10.1109/ICASSP.2015.7178377