Enhanced Prototypical Network with Customized Region-Aware Convolution for Few-Shot SAR ATR
Published in: Remote Sensing (Basel, Switzerland), Vol. 16, No. 19, p. 3563
Main Authors:
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01-10-2024
Summary: With the rapid development and successful application of deep learning technologies in the field of remote sensing, numerous deep-learning-based methods have emerged for synthetic aperture radar (SAR) automatic target recognition (ATR) tasks over the past few years. Generally, most deep-learning-based methods achieve outstanding recognition performance provided that abundant labeled samples are available to train the model. However, in real application scenarios, acquiring and annotating abundant SAR images is difficult and costly due to the imaging mechanism of SAR, which poses a significant challenge to existing SAR ATR methods. Therefore, SAR target recognition in the few-shot setting, where only a small number of labeled samples are available, is a fundamental problem that needs to be solved. In this paper, a new method named enhanced prototypical network with customized region-aware convolution (CRCEPN) is proposed to tackle few-shot SAR ATR tasks. Specifically, a feature-extraction network based on a customized and region-aware convolution is first developed. This network can adaptively adjust convolutional kernels and their receptive fields according to each SAR image’s own characteristics as well as the semantic similarity among spatial regions, thereby strengthening its capability to extract informative and discriminative features. To achieve accurate and robust target identity prediction under the few-shot condition, an enhanced prototypical network is proposed. This network improves the representation ability of the class prototype by properly making use of training and test samples together, thereby effectively raising the classification accuracy. Meanwhile, a new hybrid loss is designed to learn a feature space with both inter-class separability and intra-class tightness, which further improves the recognition performance of the proposed method. Experiments performed on the moving and stationary target acquisition and recognition (MSTAR) dataset, the OpenSARShip dataset, and the SAMPLE+ dataset demonstrate that the proposed method is competitive with some state-of-the-art methods for few-shot SAR ATR tasks.
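As background for the summary above, the sketch below illustrates the classification rule of a standard prototypical network, which the paper's enhanced prototypical network builds on: each class prototype is the mean of the embedded support (training) samples of that class, and a query (test) sample is assigned to the class whose prototype is nearest in feature space. This is a minimal illustration of the baseline concept only, assuming a Euclidean distance metric and random placeholder features; the function names are hypothetical, and it does not reproduce the authors' enhanced prototype construction, region-aware convolution, or hybrid loss.

```python
# Minimal sketch of the standard prototypical-network classification rule
# (baseline concept only; not the paper's enhanced prototype construction).
# Function names and the random "features" below are illustrative placeholders.
import numpy as np

def class_prototypes(support_feats, support_labels, n_classes):
    """Prototype of each class = mean of its embedded support samples."""
    return np.stack([
        support_feats[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify_queries(query_feats, prototypes):
    """Assign each query to the class with the nearest (Euclidean) prototype."""
    # Squared Euclidean distances, shape (n_query, n_classes)
    dists = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_way, k_shot, feat_dim = 5, 5, 64            # e.g., a 5-way 5-shot episode
    support = rng.normal(size=(n_way * k_shot, feat_dim))
    labels = np.repeat(np.arange(n_way), k_shot)  # k_shot samples per class
    queries = rng.normal(size=(10, feat_dim))

    protos = class_prototypes(support, labels, n_way)
    print(classify_queries(queries, protos))      # predicted class indices
```

In practice the support and query features would come from a learned embedding network (in this paper, the customized region-aware convolutional feature extractor) rather than random vectors.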
ISSN: 2072-4292
DOI: 10.3390/rs16193563