VT-MCNet: High-Accuracy Automatic Modulation Classification Model Based on Vision Transformer
Published in: | IEEE Communications Letters, Vol. 28, No. 1, p. 1 |
Main Authors: | Dao, Thien-Thanh; Noh, Dae-Il; Hasegawa, Mikio; Sekiya, Hiroo; Pham, Quoc-Viet; Hwang, Won-Joo |
Format: | Journal Article |
Language: | English |
Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01-01-2024 |
Subjects: | Accuracy; Classification; Cognitive radio; Computer architecture; Computing costs; Convolution; convolutional neural network; Feature extraction; Kernel; Modulation; Modulation classification; Multilayer perceptrons; Multilayers; Tensors; Transformers; vision transformers; wireless communications |
Online Access: | https://ieeexplore.ieee.org/document/10328879 |
Abstract | Cognitive radio networks' evolution hinges significantly on the use of automatic modulation classification (AMC). However, existing research reveals limitations in attaining high AMC accuracy due to ineffective feature extraction from signals. To counter this, we propose a vision-centric approach employing diverse kernel sizes to enrich signal feature extraction. In addition, we refine the transformer architecture by incorporating a dual-branch multi-layer perceptron network, enabling diverse pattern learning and improving the model's running speed. Specifically, our architecture allows the system to focus on the relevant portions of the input sequence, thereby improving classification accuracy in both high and low signal-to-noise-ratio regimes. Using the widely recognized DeepSig dataset, our pioneering deep model, termed VT-MCNet, outperforms prior leading-edge deep networks in terms of both classification accuracy and computational cost. Notably, VT-MCNet reaches a cumulative classification rate of up to 99.24%, whereas the state-of-the-art method, despite higher computational complexity, achieves only 99.06%. |
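The record contains no code, but the model the abstract describes can be pictured with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' VT-MCNet: the kernel sizes (3, 5, 7), embedding width, depth, the sum-fusion of the two MLP branches, and the RML2016.10a-style input of 128 I/Q samples with 11 modulation classes are all placeholders chosen for the sketch, since the record gives none of these details.

```python
# Minimal sketch (assumed architecture, not the authors' released VT-MCNet code):
# a ViT-style modulation classifier that (i) embeds raw I/Q frames with parallel
# 1-D convolutions of different kernel sizes and (ii) replaces the transformer
# feed-forward layer with a dual-branch MLP. Kernel sizes, widths, depth, and the
# sum-fusion of the two MLP branches are illustrative assumptions.
import torch
import torch.nn as nn


class MultiKernelEmbed(nn.Module):
    """Parallel 1-D convolutions over the 2-channel I/Q signal, concatenated per token."""

    def __init__(self, d_model=96, kernel_sizes=(3, 5, 7)):  # assumed kernel sizes
        super().__init__()
        branch_dim = d_model // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv1d(2, branch_dim, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                       # x: (batch, 2, num_samples)
        feats = [branch(x) for branch in self.branches]
        return torch.cat(feats, dim=1).transpose(1, 2)   # (batch, tokens, d_model)


class DualBranchMLP(nn.Module):
    """Two parallel MLP branches with different activations, fused by summation."""

    def __init__(self, d_model=96, hidden=192):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(d_model, hidden), nn.GELU(), nn.Linear(hidden, d_model))
        self.branch_b = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(), nn.Linear(hidden, d_model))

    def forward(self, x):
        return self.branch_a(x) + self.branch_b(x)


class EncoderBlock(nn.Module):
    """Pre-norm transformer block: self-attention followed by the dual-branch MLP."""

    def __init__(self, d_model=96, num_heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = DualBranchMLP(d_model)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]   # attend over the token sequence
        return x + self.mlp(self.norm2(x))


class AMCTransformer(nn.Module):
    """Multi-kernel embedding + transformer encoder + classification head."""

    def __init__(self, num_classes=11, d_model=96, depth=2, seq_len=128):
        super().__init__()
        self.embed = MultiKernelEmbed(d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, seq_len + 1, d_model))  # learned positions
        self.blocks = nn.Sequential(*[EncoderBlock(d_model) for _ in range(depth)])
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                       # x: (batch, 2, seq_len) raw I/Q frame
        tokens = self.embed(x)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos
        tokens = self.blocks(tokens)
        return self.head(tokens[:, 0])          # classify from the [CLS] token


# Example: RML2016.10a-style frames are 128 complex samples, i.e. shape (2, 128),
# with 11 candidate modulations; both values are assumptions about the dataset used.
logits = AMCTransformer()(torch.randn(4, 2, 128))
print(logits.shape)                             # torch.Size([4, 11])
```

Running the sketch prints torch.Size([4, 11]), one logit per candidate modulation for each of the four random frames; the self-attention layers are what let such a model weight the most informative parts of the I/Q sequence, the property the abstract credits for accuracy at both high and low SNR.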
Author | Pham, Quoc-Viet; Sekiya, Hiroo; Hwang, Won-Joo; Hasegawa, Mikio; Dao, Thien-Thanh; Noh, Dae-Il |
Author_xml | – sequence: 1; givenname: Thien-Thanh; surname: Dao; fullname: Dao, Thien-Thanh; orcidid: 0000-0003-1952-7067; organization: Department of Information Convergence Engineering, Pusan National University, Busan, South Korea
– sequence: 2; givenname: Dae-Il; surname: Noh; fullname: Noh, Dae-Il; orcidid: 0000-0002-6586-5780; organization: Department of Information Convergence Engineering, Center for Artificial Intelligence Research, Pusan National University, Busan, South Korea
– sequence: 3; givenname: Mikio; surname: Hasegawa; fullname: Hasegawa, Mikio; orcidid: 0000-0001-5638-8022; organization: Department of Electrical Engineering, Tokyo University of Science, Tokyo, Japan
– sequence: 4; givenname: Hiroo; surname: Sekiya; fullname: Sekiya, Hiroo; orcidid: 0000-0003-3557-1463; organization: Graduate School of Engineering, Chiba University, Chiba, Japan
– sequence: 5; givenname: Quoc-Viet; surname: Pham; fullname: Pham, Quoc-Viet; orcidid: 0000-0002-9485-9216; organization: School of Computer Science and Statistics, Trinity College Dublin, Dublin 2, Ireland
– sequence: 6; givenname: Won-Joo; surname: Hwang; fullname: Hwang, Won-Joo; orcidid: 0000-0001-8398-564X; organization: Department of Information Convergence Engineering, Center for Artificial Intelligence Research, Pusan National University, Busan, South Korea |
CODEN | ICLEF6 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
DOI | 10.1109/LCOMM.2023.3336985 |
Discipline | Engineering |
EISSN | 1558-2558 |
EndPage | 1 |
ExternalDocumentID | 10_1109_LCOMM_2023_3336985 10328879 |
Genre | orig-research |
ISSN | 1089-7798 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Language | English |
ORCID | 0000-0002-6586-5780 0000-0002-9485-9216 0000-0003-1952-7067 0000-0003-3557-1463 0000-0001-5638-8022 0000-0001-8398-564X |
PageCount | 1 |
PublicationCentury | 2000 |
PublicationDate | 2024-01-01 |
PublicationDateYYYYMMDD | 2024-01-01 |
PublicationDate_xml | – month: 01 year: 2024 text: 2024-01-01 day: 01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationPlace_xml | – name: New York |
PublicationTitle | IEEE communications letters |
PublicationTitleAbbrev | LCOMM |
PublicationYear | 2024 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1 |
SubjectTerms | Accuracy; Classification; Cognitive radio; Computer architecture; Computing costs; Convolution; convolutional neural network; Feature extraction; Kernel; Modulation; Modulation classification; Multilayer perceptrons; Multilayers; Tensors; Transformers; vision transformers; wireless communications
Title | VT-MCNet: High-Accuracy Automatic Modulation Classification Model Based on Vision Transformer
URI | https://ieeexplore.ieee.org/document/10328879 https://www.proquest.com/docview/2912933933 |
Volume | 28 |