A Novel Dual-Branch Global and Local Feature Extraction Network for SAR and Optical Image Registration
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 17, pp. 17637–17650
Main Authors:
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: Registering synthetic aperture radar (SAR) and optical images is essential for exploiting their complementary information and is a key prerequisite for image fusion. However, the inherent differences between the two modalities pose a challenge to existing deep-learning algorithms that depend only on local features. To address this problem, we propose a global and local feature extraction network (GLFE-Net) for SAR and optical image registration. Beyond extracting local features to generate feature descriptors, the network also extracts global features to better mine the structural features common to SAR and optical images. First, considering the transformer's strength in modeling long-range dependencies and the advantages of convolutional neural networks for local feature extraction, GLFE-Net adopts two branches: one uses an attention-discarded global feature extraction transformer (ADG Transformer) to extract global features from the entire SAR and optical images, and the other uses a partial convolution unit with a parameter-free attention mechanism (PCU-AM) to extract local features from the patches around the keypoints. The output features of the two branches are then fused to obtain the final feature descriptor with more comprehensive features. Second, to establish accurate keypoint correspondences in the matching phase, a hard-mean loss function is proposed to optimize GLFE-Net; it jointly exploits hard negative samples and the remaining negative samples to guide the descriptors toward more discriminative features that better distinguish positive and negative samples in SAR and optical images. Finally, experimental results on the publicly available OS dataset and the WHU-SEN-City dataset demonstrate that the proposed GLFE-Net outperforms existing state-of-the-art methods, reducing the average root-mean-square error (aRMSE) by 0.14.
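The abstract outlines a dual-branch design: a transformer branch that extracts global features from the whole image, a convolutional branch that extracts local features from keypoint patches, and a fusion step that produces the final descriptor. The following is a minimal PyTorch sketch of that structure; the module names, depths, and dimensions are assumptions for illustration and are not the authors' ADG Transformer or PCU-AM implementations.

```python
# Minimal sketch of the dual-branch descriptor idea described in the abstract.
# Module names, depths, and dimensions are assumptions for illustration only;
# they are not the ADG Transformer / PCU-AM implementations from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalBranch(nn.Module):
    """Stand-in for the global (transformer) branch: attends over the whole
    image and returns one global feature vector per image."""

    def __init__(self, in_ch=1, dim=128, depth=2, heads=4):
        super().__init__()
        # Non-overlapping 16x16 patch embedding.
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=16, stride=16)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, image):                                   # (B, 1, H, W)
        tokens = self.embed(image).flatten(2).transpose(1, 2)   # (B, N, dim)
        return self.encoder(tokens).mean(dim=1)                 # (B, dim)


class LocalBranch(nn.Module):
    """Stand-in for the local (convolutional) branch applied to patches
    cropped around detected keypoints."""

    def __init__(self, in_ch=1, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, patches):                                 # (B, 1, 32, 32)
        return self.net(patches).flatten(1)                     # (B, dim)


class DualBranchDescriptor(nn.Module):
    """Fuses the global and local features into one unit-length descriptor."""

    def __init__(self, dim=128, out_dim=128):
        super().__init__()
        self.global_branch = GlobalBranch(dim=dim)
        self.local_branch = LocalBranch(dim=dim)
        self.fuse = nn.Linear(2 * dim, out_dim)

    def forward(self, image, patches):
        # Assumes one keypoint patch per image in the batch for simplicity.
        g = self.global_branch(image)                           # (B, dim)
        l = self.local_branch(patches)                          # (B, dim)
        return F.normalize(self.fuse(torch.cat([g, l], dim=1)), dim=1)
```

The hard-mean loss is described as jointly using hard negatives and the remaining negatives. One plausible margin-based reading, again an assumption rather than the paper's exact formulation, is sketched below for L2-normalized descriptors and a batch size greater than two:

```python
# One plausible, margin-based reading of the "hard-mean" idea: a hardest-negative
# term plus a term over the mean of the remaining negatives. The margin, the
# weighting alpha, and the exact form are assumptions, not the paper's definition.
import torch


def hard_mean_loss(desc_sar, desc_opt, margin=1.0, alpha=0.5):
    """desc_sar, desc_opt: (B, D) L2-normalized descriptors; row i of each
    tensor is a matching (positive) SAR/optical pair. Assumes B > 2."""
    dist = torch.cdist(desc_sar, desc_opt)                 # (B, B) pairwise distances
    pos = dist.diagonal()                                  # positive-pair distances

    eye = torch.eye(dist.size(0), dtype=torch.bool, device=dist.device)
    neg = dist.masked_fill(eye, float("inf"))              # exclude positives
    hard_neg = neg.min(dim=1).values                       # hardest negative per anchor

    # Mean of the remaining negatives (all negatives except the hardest one).
    neg_sum = dist.masked_fill(eye, 0.0).sum(dim=1)
    mean_neg = (neg_sum - hard_neg) / (dist.size(0) - 2)

    hard_term = torch.clamp(margin + pos - hard_neg, min=0.0)
    mean_term = torch.clamp(margin + pos - mean_neg, min=0.0)
    return (hard_term + alpha * mean_term).mean()
```

In this reading, descriptors from corresponding SAR and optical keypoints would be fed to the loss during training: the hardest-negative term enforces a strict margin, while the mean term keeps the remaining negatives pushed away as well; the paper's actual loss may weight or combine these terms differently.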
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2024.3435684