Automatic detection and taxonomic identification of dolphin vocalisations using convolutional neural networks for passive acoustic monitoring
Published in: Ecological Informatics, Vol. 78, p. 102291
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-12-2023
Summary: A novel framework for acoustic detection and species identification is proposed to aid passive acoustic monitoring studies of the endangered Indian Ocean humpback dolphin (Sousa plumbea) in South African waters. Convolutional Neural Networks (CNNs) were used for both the detection and the species identification of dolphin vocalisations, and performance was evaluated using custom and pre-trained architectures (transfer learning). In total, 723 min of acoustic data were annotated for the presence of whistles, burst pulses and echolocation clicks produced by Delphinus delphis (~45.6%), Tursiops aduncus (~39%), Sousa plumbea (~14.4%), and Orcinus orca (~1%). The best-performing models for detecting dolphin presence and for species identification used two-second segments (spectral windows) and were trained on images at 70 and 90 dpi, respectively. The best detection model was built on a customised architecture and achieved an accuracy of 84.4% for all dolphin vocalisations on the test set, and 89.5% for vocalisations with a high signal-to-noise ratio. The best identification model was also built on the customised architecture and correctly identified S. plumbea (96.9%), T. aduncus (100%), and D. delphis (78%) encounters in the test dataset. The framework was designed based on knowledge of complex dolphin sounds and may assist in finding suitable CNN hyper-parameters for other species or populations. Our study contributes towards the development of an open-source tool to assist long-term passive acoustic monitoring studies of endangered species living in highly diverse habitats.
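The summary describes a pipeline of slicing recordings into two-second spectral windows, rendering them as spectrogram images at a fixed dpi, and classifying them with a custom CNN. The sketch below is a minimal, hypothetical illustration of that kind of pipeline in Python (librosa + TensorFlow), not the authors' released code; the sampling rate, STFT settings, figure size, and network layout are assumptions chosen for demonstration only.

```python
# Illustrative sketch only -- not the paper's implementation.
# Shows one way to convert recordings into two-second spectrogram images
# and build a small custom CNN for binary dolphin-presence detection.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt
import tensorflow as tf

SEGMENT_S = 2.0   # two-second spectral windows, as in the best-performing models
SR = 48000        # assumed sampling rate (not stated in the abstract)

def audio_to_spectrogram_images(wav_path, out_prefix, dpi=70):
    """Slice a recording into 2 s windows and save each window as a spectrogram image."""
    y, sr = librosa.load(wav_path, sr=SR)
    hop = int(SEGMENT_S * sr)
    for i in range(len(y) // hop):
        seg = y[i * hop:(i + 1) * hop]
        S = librosa.amplitude_to_db(np.abs(librosa.stft(seg, n_fft=1024)), ref=np.max)
        fig = plt.figure(figsize=(3, 3), dpi=dpi)       # dpi sets the image resolution
        librosa.display.specshow(S, sr=sr, cmap="viridis")
        plt.axis("off")
        fig.savefig(f"{out_prefix}_{i:05d}.png", bbox_inches="tight", pad_inches=0)
        plt.close(fig)

def build_detector(input_shape=(128, 128, 3)):
    """Minimal custom CNN producing a dolphin present / absent probability."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary detection output
    ])
```

A species-identification counterpart would swap the sigmoid output for a softmax layer over the annotated species, or replace the custom stack with a pre-trained backbone for the transfer-learning comparison described in the summary.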
Highlights:
• Novel framework to detect and identify dolphin sounds in long-term recordings.
• Detection model achieved 89.5% accuracy for sounds used in ecological studies.
• Audio segment length and image dpi were important for both models' accuracy.
• Best multi-class species classifier achieved 96.9% accuracy for S. plumbea.
ISSN: 1574-9541
DOI: 10.1016/j.ecoinf.2023.102291