VT-MCNet: High-Accuracy Automatic Modulation Classification Model Based on Vision Transformer

Bibliographic Details
Published in: IEEE Communications Letters, Vol. 28, No. 1, p. 1
Main Authors: Dao, Thien-Thanh, Noh, Dae-Il, Hasegawa, Mikio, Sekiya, Hiroo, Pham, Quoc-Viet, Hwang, Won-Joo
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-01-2024
Description
Summary: The evolution of cognitive radio networks hinges significantly on automatic modulation classification (AMC). However, existing approaches fall short of high AMC accuracy because they extract signal features ineffectively. To counter this, we propose a vision-centric approach that employs diverse kernel sizes to enrich signal feature extraction. In addition, we refine the transformer architecture by incorporating a dual-branch multi-layer perceptron network, enabling diverse pattern learning and improving the model's running speed. Specifically, our architecture allows the system to focus on the relevant portions of the input sequence, which improves classification accuracy in both high and low signal-to-noise regimes. On the widely recognized DeepSig dataset, our deep model, termed VT-MCNet, outperforms prior state-of-the-art deep networks in both classification accuracy and computational cost. Notably, VT-MCNet reaches a cumulative classification rate of up to 99.24%, whereas the best existing method, despite its higher computational complexity, achieves only 99.06%.
ISSN: 1089-7798, 1558-2558
DOI: 10.1109/LCOMM.2023.3336985
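
The summary above describes the architecture only at a high level: a convolutional front end with several kernel sizes feeding a transformer whose feed-forward stage is split into two parallel branches. The Python (PyTorch) sketch below is one minimal way to realize that combination; the module names, layer widths, kernel sizes, depth, number of classes, and the (2, 128) RadioML-style I/Q input shape are assumptions made for illustration, not the configuration published in the letter.

import torch
import torch.nn as nn

class MultiKernelStem(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes over an I/Q frame."""
    def __init__(self, in_ch=2, out_ch=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                                    # x: (batch, 2, 128)
        return torch.cat([b(x) for b in self.branches], dim=1)

class DualBranchMLP(nn.Module):
    """Two parallel feed-forward branches in place of the usual single MLP."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.branch_b = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.branch_a(x) + self.branch_b(x)

class TransformerBlock(nn.Module):
    def __init__(self, dim=96, heads=4, hidden=192):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = DualBranchMLP(dim, hidden)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]    # self-attention over the sequence
        return x + self.mlp(self.norm2(x))

class VTMCNetSketch(nn.Module):
    def __init__(self, num_classes=11, dim=96, depth=2):
        super().__init__()
        self.stem = MultiKernelStem(out_ch=dim // 3)          # 3 kernel branches -> dim channels
        self.blocks = nn.Sequential(*[TransformerBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                     # x: (batch, 2, 128)
        tokens = self.stem(x).transpose(1, 2)                 # (batch, 128, dim) token sequence
        tokens = self.blocks(tokens)
        return self.head(tokens.mean(dim=1))                  # pool over time, then classify

model = VTMCNetSketch()
logits = model(torch.randn(4, 2, 128))                        # four synthetic I/Q frames
print(logits.shape)                                           # torch.Size([4, 11])

The attention layers let each time step attend to the most informative parts of the frame, and the two summed MLP branches are one plausible reading of the "dual-branch multi-layer perceptron" the summary mentions; the letter itself should be consulted for the exact design.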