Music Genre Classification using Large Language Models
Main Authors:
Format: Journal Article
Language: English
Published: 10-10-2024
Summary: This paper exploits the zero-shot capabilities of pre-trained large language models (LLMs) for music genre classification. The proposed approach splits audio signals into 20 ms chunks and processes them through convolutional feature encoders, a transformer encoder, and additional layers for coding audio units and generating feature vectors. The extracted feature vectors are used to train a classification head. During inference, predictions on individual chunks are aggregated for a final genre classification. We conducted a comprehensive comparison of LLMs, including WavLM, HuBERT, and wav2vec 2.0, with traditional deep learning architectures like 1D and 2D convolutional neural networks (CNNs) and the audio spectrogram transformer (AST). Our findings demonstrate the superior performance of the AST model, achieving an overall accuracy of 85.5%, surpassing all other models evaluated. These results highlight the potential of LLMs and transformer-based architectures for advancing music information retrieval tasks, even in zero-shot scenarios.
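The summary's pipeline — split the signal into 20 ms chunks, score each chunk with a frozen encoder plus trained classification head, then aggregate chunk-level predictions into one track-level genre — can be sketched as follows. This is a minimal illustration, not the authors' code: the chunking helper, the 16 kHz sample rate, the 10-genre head, and logit averaging as the aggregation rule are all assumptions (the abstract does not say whether aggregation is majority vote or score averaging).

```python
import numpy as np

def split_into_chunks(signal: np.ndarray, sample_rate: int, chunk_ms: int = 20) -> np.ndarray:
    """Split a 1-D audio signal into fixed-length chunks (default 20 ms),
    dropping any trailing partial chunk. Returns shape (n_chunks, chunk_len)."""
    chunk_len = int(sample_rate * chunk_ms / 1000)
    n_chunks = len(signal) // chunk_len
    return signal[: n_chunks * chunk_len].reshape(n_chunks, chunk_len)

def aggregate_predictions(chunk_logits: np.ndarray) -> int:
    """Combine per-chunk class scores into one track-level genre label.
    Here: average logits over chunks, then argmax (one plausible scheme)."""
    return int(np.argmax(chunk_logits.mean(axis=0)))

# Toy usage: 1 s of synthetic audio at 16 kHz -> 50 chunks of 20 ms (320 samples).
signal = np.random.randn(16000)
chunks = split_into_chunks(signal, sample_rate=16000)
# Stand-in for a frozen encoder + trained head emitting logits for 10 genres.
logits = np.random.randn(len(chunks), 10)
label = aggregate_predictions(logits)
```

In the paper's setup, the stand-in logits would come from feature vectors produced by a pre-trained encoder such as wav2vec 2.0, HuBERT, or WavLM, passed through the trained classification head.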
DOI: 10.48550/arxiv.2410.08321