Deep Learning for Audio Event Detection and Tagging on Low-Resource Datasets

Bibliographic Details
Published in: Applied Sciences, Vol. 8, no. 8, p. 1397
Main Authors: Morfi, Veronica; Stowell, Dan
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01-08-2018
Description
Summary: In training a deep learning system to perform audio transcription, two practical problems may arise. Firstly, most datasets are weakly labelled, providing only a list of the events present in each recording, without any temporal information for training. Secondly, deep neural networks need a very large amount of labelled training data to achieve good performance, yet in practice it is difficult to collect enough samples for most classes of interest. In this paper, we propose factorising the final task of audio transcription into multiple intermediate tasks in order to improve training performance when dealing with this kind of low-resource dataset. We evaluate three data-efficient approaches to training a stacked convolutional and recurrent neural network for the intermediate tasks. Our results show that the different training methods have different advantages and disadvantages.
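The weak-labelling problem the summary describes has a standard consequence for model design: the network predicts event probabilities per frame, but only clip-level labels exist, so frame predictions must be pooled over time before computing the loss. The sketch below illustrates that idea with a toy NumPy forward pass (a 1-D convolution, a plain recurrent layer, and max-pooling over time). All dimensions, weights, and the use of a plain RNN instead of the paper's actual architecture are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w):
    # x: (time, features); w: (kernel, in, out); valid convolution over time
    k, _, cout = w.shape
    t = x.shape[0] - k + 1
    out = np.zeros((t, cout))
    for i in range(t):
        out[i] = np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU

def simple_rnn(x, wx, wh):
    # minimal tanh recurrence; a stand-in for the recurrent layers
    h = np.zeros(wh.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ wx + h @ wh)
        states.append(h)
    return np.stack(states)

def crnn_clip_tags(spectrogram, n_classes=5):
    # hypothetical layer sizes, for illustration only
    w_conv = rng.normal(0, 0.1, (3, spectrogram.shape[1], 16))
    w_in = rng.normal(0, 0.1, (16, 8))
    w_rec = rng.normal(0, 0.1, (8, 8))
    w_out = rng.normal(0, 0.1, (8, n_classes))

    h = conv1d_relu(spectrogram, w_conv)
    h = simple_rnn(h, w_in, w_rec)
    frame_probs = 1.0 / (1.0 + np.exp(-(h @ w_out)))  # per-frame event probabilities

    # Weak labels are clip-level: pool frame predictions over time so the
    # output is comparable to the recording's list of present events.
    return frame_probs.max(axis=0)

spec = rng.random((100, 40))  # 100 frames x 40 spectral bands
tags = crnn_clip_tags(spec)
print(tags.shape)  # (5,)
```

With max-pooling, a class is tagged as present if it is strongly predicted in any frame, which is exactly the information a weak label carries; mean-pooling or attention pooling are common alternatives with different trade-offs.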
ISSN: 2076-3417
DOI: 10.3390/app8081397