Remote Sensing Scene Classification via Multi-Branch Local Attention Network
Published in: IEEE Transactions on Image Processing, Vol. 31, pp. 99-109
Main Authors:
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Summary: Remote sensing scene classification (RSSC) has become a research hotspot in recent years and plays an important role in remote sensing image interpretation. With the recent development of convolutional neural networks (CNNs), significant breakthroughs have been made in the classification of remote sensing scenes. However, many objects form complex and diverse scenes through spatial combination and association, which makes remote sensing scenes difficult to classify. The feature representations extracted by CNNs often remain insufficiently discriminative, mainly because of the similarity between inter-class images and the diversity within intra-class images. In this paper, we propose a remote sensing image scene classification method based on a Multi-Branch Local Attention Network (MBLANet), in which a Convolutional Local Attention Module (CLAM) is embedded into all down-sampling blocks and residual blocks of a ResNet backbone. CLAM contains two submodules, the Convolutional Channel Attention Module (CCAM) and the Local Spatial Attention Module (LSAM). The two submodules are placed in parallel to obtain both channel and spatial attention, which helps emphasize the main target against a complex background and improves the network's feature representation ability. Extensive experiments on three benchmark datasets show that our method outperforms state-of-the-art methods.
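The summary describes CLAM as two attention submodules (channel-wise CCAM and spatial LSAM) running in parallel, with their outputs used to reweight the backbone features. The record does not give the exact formulation, so the sketch below is only a plausible PyTorch illustration under common assumptions: CCAM is approximated by ECA-style channel attention (global pooling followed by a 1D convolution), LSAM by CBAM-style spatial attention over pooled channel maps, and the two attention maps are fused by addition before being applied to the input. All module internals here are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class CCAM(nn.Module):
    """Assumed channel attention: global average pooling + 1D conv (ECA-style)."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        # x: (B, C, H, W) -> channel weights of shape (B, C, 1, 1)
        w = x.mean(dim=(2, 3))                    # global average pooling: (B, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)  # 1D conv across the channel axis
        return torch.sigmoid(w).unsqueeze(-1).unsqueeze(-1)


class LSAM(nn.Module):
    """Assumed local spatial attention: conv over avg/max channel pools (CBAM-style)."""
    def __init__(self, k=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CLAM(nn.Module):
    """Parallel channel + spatial attention; the fusion-by-addition is an assumption."""
    def __init__(self, channels):
        super().__init__()
        self.ccam = CCAM(channels)
        self.lsam = LSAM()

    def forward(self, x):
        # Broadcasting combines (B, C, 1, 1) and (B, 1, H, W) into (B, C, H, W)
        return x * (self.ccam(x) + self.lsam(x))


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    y = CLAM(64)(x)
    print(tuple(y.shape))  # same shape as the input: (2, 64, 32, 32)
```

Because CLAM preserves the input tensor's shape, a module like this can be dropped after any ResNet residual or down-sampling block without changing the rest of the architecture, which matches the embedding strategy the summary describes.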
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2021.3127851