AEDNet: An Attention-Based Encoder-Decoder Network for Urban Water Extraction From High Spatial Resolution Remote Sensing Images
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 17, pp. 1286-1298
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: Accurate water extraction from urban remote sensing images is of great significance for formulating river and lake management policies and ensuring the sustainable development of urban water resources. However, urban high-resolution remote sensing images contain complex spatial and semantic information, which leads to disparities between water body features extracted from local and global information and consequently reduces the accuracy of urban water extraction. To tackle this issue, an attention-based encoder-decoder network is proposed. In this network, a backbone employing atrous convolution (AC) captures low-level and high-level features of urban remote sensing images at multiple scales. Integrated with the attention mechanism, the encoder-decoder structure extracts global features in both the spatial and channel domains. These two types of features are then merged to yield the urban water segmentation. Moreover, a joint loss function (JLF) that accounts for both intersection over union and class weights is introduced to further improve the accuracy of urban water extraction. Experimental results demonstrate the strong performance of the proposed method on both the GID and LoveDA datasets.
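The record does not give the exact form of the joint loss function (JLF), only that it combines an intersection-over-union term with class weighting. A minimal sketch of one plausible formulation, using a soft-IoU loss plus a class-weighted binary cross-entropy (the weighting scheme, the blend factor `lam`, and all function names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def weighted_ce_loss(probs, labels, class_weights, eps=1e-7):
    # Per-pixel binary cross-entropy, weighted per class to counter the
    # typical imbalance between water pixels (minority) and background.
    # class_weights = (background_weight, water_weight) -- assumed scheme.
    w = np.where(labels == 1, class_weights[1], class_weights[0])
    ce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return float(np.mean(w * ce))

def soft_iou_loss(probs, labels, eps=1e-7):
    # Differentiable IoU surrogate: 1 - (intersection / union) computed
    # on soft probabilities instead of hard masks.
    inter = np.sum(probs * labels)
    union = np.sum(probs) + np.sum(labels) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def joint_loss(probs, labels, class_weights=(1.0, 2.0), lam=0.5):
    # Hypothetical JLF: convex blend of the IoU term and the
    # class-weighted cross-entropy term.
    return lam * soft_iou_loss(probs, labels) + \
           (1.0 - lam) * weighted_ce_loss(probs, labels, class_weights)
```

A confident prediction that matches the water mask should score lower under this blend than an uncertain or inverted one, which is the behaviour any IoU-plus-weighted-CE combination is meant to enforce.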
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2023.3338484