An Improved Deep Network-Based Scene Classification Method for Self-Driving Cars
Published in: | IEEE Transactions on Instrumentation and Measurement, Vol. 71, pp. 1-14 |
Main Authors: | Ni, Jianjun; Shen, Kang; Chen, Yinan; Cao, Weidong; Yang, Simon X. |
Format: | Journal Article |
Language: | English |
Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2022 |
Abstract | Self-driving cars are a hot research topic in the field of intelligent transportation systems, as they can greatly alleviate traffic jams and improve travel efficiency. Scene classification is one of the key technologies of self-driving cars, since it provides the basis for decision-making. In recent years, deep learning-based solutions have achieved good results on the scene classification problem. However, some issues require further study, such as how to deal with the similarities among different categories and the differences within the same category. To address these issues, an improved deep network-based scene classification method is proposed in this article. In the proposed method, an improved Faster Region-based Convolutional Neural Network (Faster RCNN) is used to extract the features of representative objects in the scene and obtain local features, where a new residual attention block is added to the Faster RCNN to highlight local semantics related to driving scenarios. In addition, an improved Inception module is used to extract global features, where a mixed Leaky ReLU and ELU activation function is presented to reduce possible redundancy of the convolution kernels and enhance robustness. The local and global features are then fused to realize the scene classification. Finally, a private dataset is built from public datasets for the specialized application of scene classification in the self-driving field, and the proposed method is tested on this dataset. The experimental results show that the accuracy of the proposed method reaches 94.76%, which is higher than that of state-of-the-art methods. |
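The abstract names two concrete mechanisms: a mixed Leaky ReLU/ELU activation inside the improved Inception module, and fusion of local (Faster RCNN) and global (Inception) features before classification. The record does not give the exact mixing rule or fusion operator, so the sketch below is a minimal illustration under assumptions: the blending weight `beta`, the convex combination of the two negative branches, and concatenation fusion are all hypothetical.

```python
import numpy as np

def leaky_relu(x, leak=0.01):
    # Standard Leaky ReLU: identity for x >= 0, small slope for x < 0.
    return np.where(x >= 0, x, leak * x)

def elu(x, alpha=1.0):
    # Standard ELU: identity for x >= 0, smooth exponential saturation for x < 0.
    return np.where(x >= 0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))

def mixed_leaky_relu_elu(x, beta=0.5, leak=0.01, alpha=1.0):
    # Hypothetical mixed activation: keep the positive branch and blend the
    # Leaky ReLU and ELU negative branches with weight beta. The paper's
    # actual mixing rule may differ from this convex combination.
    neg = beta * leak * x + (1.0 - beta) * alpha * (np.exp(np.minimum(x, 0.0)) - 1.0)
    return np.where(x >= 0, x, neg)

def fuse_features(local_feat, global_feat):
    # Simplest plausible fusion: L2-normalize each descriptor and concatenate.
    # The record does not specify the fusion operator; this is an assumption.
    local_feat = local_feat / (np.linalg.norm(local_feat) + 1e-12)
    global_feat = global_feat / (np.linalg.norm(global_feat) + 1e-12)
    return np.concatenate([local_feat, global_feat])

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 7)
    print(mixed_leaky_relu_elu(x))                 # blended negative branch, identity positive branch
    fused = fuse_features(np.random.rand(256), np.random.rand(512))
    print(fused.shape)                             # (768,)
```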
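The abstract also mentions a residual attention block inserted into the Faster RCNN branch to highlight driving-related semantics. The paper's exact block design is not reproduced in this record; the sketch below assumes a squeeze-and-excitation style channel attention with an identity shortcut, and the reduction ratio is an illustrative choice.

```python
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    """Channel attention with a residual (identity) shortcut.

    A hypothetical stand-in for the paper's residual attention block: the
    squeeze-and-excitation form and reduction ratio below are assumptions.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                     # excitation: per-channel reweighting in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x + x * w                             # residual shortcut keeps the original features

if __name__ == "__main__":
    feat = torch.randn(2, 256, 38, 50)               # e.g. a backbone feature map
    print(ResidualAttentionBlock(256)(feat).shape)   # torch.Size([2, 256, 38, 50])
```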
Author | Yang, Simon X.; Ni, Jianjun; Shen, Kang; Chen, Yinan; Cao, Weidong
Author_xml | – sequence: 1; givenname: Jianjun; surname: Ni; fullname: Ni, Jianjun; orcidid: 0000-0002-7130-8331; email: njjhhuc@gmail.com; organization: College of Internet of Things Engineering, Hohai University, Changzhou, Jiangsu, China
– sequence: 2; givenname: Kang; surname: Shen; fullname: Shen, Kang; orcidid: 0000-0002-5575-1069; email: shenkang_hhu@hhu.edu.cn; organization: College of Internet of Things Engineering, Hohai University, Changzhou, Jiangsu, China
– sequence: 3; givenname: Yinan; surname: Chen; fullname: Chen, Yinan; orcidid: 0000-0003-2852-0078; email: chenyinan96@163.com; organization: College of Internet of Things Engineering, Hohai University, Changzhou, Jiangsu, China
– sequence: 4; givenname: Weidong; surname: Cao; fullname: Cao, Weidong; orcidid: 0000-0002-0394-9639; email: cwd2018@hhu.edu.cn; organization: College of Internet of Things Engineering, Hohai University, Changzhou, Jiangsu, China
– sequence: 5; givenname: Simon X.; surname: Yang; fullname: Yang, Simon X.; orcidid: 0000-0002-6888-7993; email: syang@uoguelph.ca; organization: Advanced Robotics and Intelligent Systems (ARIS) Laboratory, School of Engineering, University of Guelph, Guelph, ON, Canada
CODEN | IEIMAO |
CitedBy_id | crossref_primary_10_3390_s22155856 crossref_primary_10_1109_TIM_2023_3300474 crossref_primary_10_1109_TIM_2024_3351240 crossref_primary_10_1109_JSEN_2023_3304973 crossref_primary_10_3390_rs16010149 crossref_primary_10_3390_w14081300 crossref_primary_10_1109_TVT_2023_3267500 crossref_primary_10_1007_s11042_023_15845_5 crossref_primary_10_1007_s11042_024_18199_8 crossref_primary_10_1109_TIM_2022_3200361 crossref_primary_10_3389_fnbot_2023_1143032 crossref_primary_10_1109_TETCI_2023_3234548 crossref_primary_10_3390_rs16132465 crossref_primary_10_1007_s00500_023_09278_3 crossref_primary_10_1109_ACCESS_2024_3359435 crossref_primary_10_1007_s11042_023_17235_3 crossref_primary_10_1109_TIM_2023_3244819 crossref_primary_10_1109_TIM_2022_3200434 crossref_primary_10_1109_TIM_2023_3246534 crossref_primary_10_1007_s11042_023_18100_z crossref_primary_10_1007_s11042_024_18313_w crossref_primary_10_1109_TIM_2023_3260282 crossref_primary_10_1109_TIM_2023_3289563 crossref_primary_10_3390_app13158623 crossref_primary_10_1109_TIM_2022_3232093 crossref_primary_10_1007_s11042_023_17625_7 crossref_primary_10_3390_app13042712 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TIM.2022.3146923 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE Xplore Open Access Journals IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library Online CrossRef Electronics & Communications Abstracts Solid State and Superconductivity Abstracts Technology Research Database Advanced Technologies Database with Aerospace |
DatabaseTitle | CrossRef Solid State and Superconductivity Abstracts Technology Research Database Advanced Technologies Database with Aerospace Electronics & Communications Abstracts |
DatabaseTitleList | Solid State and Superconductivity Abstracts |
Database_xml | – sequence: 1 dbid: ESBDL name: IEEE Xplore Open Access Journals url: https://ieeexplore.ieee.org/ sourceTypes: Publisher |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Engineering Physics |
EISSN | 1557-9662 |
EndPage | 14 |
ExternalDocumentID | 10_1109_TIM_2022_3146923 9694594 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China; grantid: 61873086, 61903123; funderid: 10.13039/501100001809
– fundername: Natural Science Foundation of Jiangsu Province; grantid: BK20190165; funderid: 10.13039/501100004608
ISSN | 0018-9456 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
ORCID | 0000-0002-6888-7993 0000-0002-7130-8331 0000-0002-5575-1069 0000-0003-2852-0078 0000-0002-0394-9639 |
OpenAccessLink | https://ieeexplore.ieee.org/document/9694594 |
PQID | 2633042382 |
PQPubID | 85462 |
PageCount | 14 |
ParticipantIDs | ieee_primary_9694594 proquest_journals_2633042382 crossref_primary_10_1109_TIM_2022_3146923 |
PublicationCentury | 2000 |
PublicationDate | 2022-01-01
PublicationDateYYYYMMDD | 2022-01-01 |
PublicationDate_xml | – year: 2022 text: 20220000 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationPlace_xml | – name: New York |
PublicationTitle | IEEE transactions on instrumentation and measurement |
PublicationTitleAbbrev | TIM |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest crossref ieee |
SourceType | Aggregation Database Publisher |
StartPage | 1 |
SubjectTerms | Artificial neural networks; Automobiles; Autonomous automobiles; Autonomous cars; Autonomous vehicles; Classification; Datasets; Decision making; Deep network; faster region with convolutional neural network features (RCNN); Feature extraction; feature fusion; Image recognition; Intelligent transportation systems; Machine learning; Object recognition; Redundancy; Roads; scene classification; self-driving car; Semantics; Traffic congestion; Traffic jams; Transportation networks; Visualization
Title | An Improved Deep Network-Based Scene Classification Method for Self-Driving Cars |
URI | https://ieeexplore.ieee.org/document/9694594 https://www.proquest.com/docview/2633042382 |
Volume | 71 |