BERT-like pre-training for symbolic piano music classification tasks

Bibliographic Details
Published in: Journal of Creative Music Systems, Vol. 8, No. 1, pp. 1-19
Main Authors: Yi-Hui Chou, I-Chun Chen, Chin-Jui Chang, Joann Ching, Yi-Hsuan Yang
Format: Journal Article
Language: English
Published: Huddersfield, United Kingdom: University of Huddersfield Press, 01-10-2024
Description
Summary: This article presents a benchmark study of symbolic piano music classification using the masked language modelling approach of the Bidirectional Encoder Representations from Transformers (BERT). Specifically, we consider two types of MIDI data: MIDI scores, which are musical scores rendered directly into MIDI with no dynamics and precisely aligned with the metrical grids notated by their composers, and MIDI performances, which are MIDI encodings of human performances of musical scoresheets. With five public-domain datasets of single-track piano MIDI files, we pre-train two 12-layer Transformer models using the BERT approach, one for MIDI scores and the other for MIDI performances, and fine-tune them for four downstream classification tasks. These include two note-level classification tasks (melody extraction and velocity prediction) and two sequence-level classification tasks (style classification and emotion classification). Our evaluation shows that the BERT approach leads to higher classification accuracy than recurrent neural network (RNN)-based baselines.
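
Illustration: The abstract describes BERT-style masked language modelling over tokenised MIDI, but the record itself contains no code. The following is a minimal PyTorch sketch of that pre-training objective, not the authors' implementation; only the 12-layer depth comes from the abstract, while the vocabulary size, model width, masking rate, and MASK_ID are illustrative assumptions.

import torch
import torch.nn as nn

class MidiBert(nn.Module):
    # BERT-style encoder over a MIDI token vocabulary (12 layers, per the abstract).
    def __init__(self, vocab_size=1000, d_model=768, n_layers=12, n_heads=12, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mlm_head = nn.Linear(d_model, vocab_size)  # predicts masked token ids

    def forward(self, tokens):  # tokens: (batch, seq_len) integer ids
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.encoder(self.tok_emb(tokens) + self.pos_emb(pos))
        return self.mlm_head(h)  # (batch, seq_len, vocab_size)

# One pre-training step: mask 15% of tokens, predict the originals.
MASK_ID = 1                                # hypothetical [MASK] token id
model = MidiBert()
tokens = torch.randint(2, 1000, (4, 512))  # stand-in for tokenised MIDI sequences
mask = torch.rand(tokens.shape) < 0.15     # 15% masking rate, an assumption
inputs = tokens.masked_fill(mask, MASK_ID)
logits = model(inputs)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()

For the downstream tasks named in the abstract, the mlm_head would be replaced by a per-token classifier for the note-level tasks (melody extraction, velocity prediction) or a pooled sequence-level classifier for style and emotion classification; this, too, is a sketch rather than the paper's exact design.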
Bibliography: Journal of Creative Music Systems, Vol. 8, No. 1, Oct 2024, pp. 1-19
Informit, Melbourne (Vic)
ISSN: 2399-7656
DOI: 10.5920/jcms.1064