Prototype-Guided Graph Reasoning Network for Few-Shot Medical Image Segmentation


Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, Vol. PP, p. 1
Main Authors: Huang, Wendong, Hu, Jinwu, Xiao, Junhao, Wei, Yang, Bi, Xiuli, Xiao, Bin
Format: Journal Article
Language: English
Published: United States, IEEE, 13-09-2024
Description
Summary: Few-shot semantic segmentation (FSS) holds tremendous potential for data-scarce scenarios, particularly medical segmentation tasks with only a few labeled samples. Most existing FSS methods distinguish query objects under the guidance of support prototypes. However, in practical clinical scenarios the variances in appearance and scale between support and query objects of the same anatomical class are often considerable, resulting in undesirable query segmentation masks. To tackle this challenge, we propose a novel prototype-guided graph reasoning network (PGRNet) that explicitly explores potential contextual relationships in structured query images. Specifically, a prototype-guided graph reasoning module performs information interaction on the query graph under the guidance of support prototypes, fully exploiting the structural properties of query images to overcome intra-class variances. Moreover, instead of fixed support prototypes, a dynamic prototype generation mechanism yields a collection of dynamic support prototypes by mining rich contextual information from support images, further boosting the efficiency of information interaction between the support and query branches. Equipped with these two components, PGRNet learns abundant contextual representations for query images and is therefore more resilient to object variations. We validate our method on three publicly available medical segmentation datasets, namely CHAOS-T2, MS-CMRSeg, and Synapse. Experiments indicate that the proposed PGRNet outperforms previous FSS methods by a considerable margin and establishes a new state-of-the-art performance.
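To make the two ingredients in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: a support prototype obtained by masked average pooling (a standard FSS operation), and a single illustrative graph-reasoning step in which query pixels act as graph nodes whose features are modulated by the prototype before affinity-weighted message passing. All shapes, function names, and the specific modulation scheme are hypothetical choices for illustration.

```python
import numpy as np

def masked_average_pooling(feat, mask):
    # feat: (C, H, W) support feature map; mask: (H, W) binary foreground mask.
    # Returns a (C,) prototype: the mean feature over foreground pixels.
    denom = mask.sum() + 1e-6
    return (feat * mask).sum(axis=(1, 2)) / denom

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prototype_guided_graph_step(query_feat, prototype):
    # Treat each query pixel as a graph node; the support prototype
    # modulates node features before computing pairwise edge weights
    # (a rough stand-in for "prototype guidance" of the query graph).
    C, H, W = query_feat.shape
    nodes = query_feat.reshape(C, H * W).T           # (HW, C) node features
    guided = nodes * prototype                       # prototype modulation
    affinity = softmax(guided @ guided.T / np.sqrt(C), axis=-1)  # (HW, HW)
    refined = affinity @ nodes                       # propagate along edges
    return refined.T.reshape(C, H, W)

# Toy example with random features and a random support mask.
rng = np.random.default_rng(0)
s_feat = rng.standard_normal((32, 16, 16))
s_mask = (rng.random((16, 16)) > 0.5).astype(float)
q_feat = rng.standard_normal((32, 16, 16))

proto = masked_average_pooling(s_feat, s_mask)       # (32,) prototype
out = prototype_guided_graph_step(q_feat, proto)
print(out.shape)  # (32, 16, 16)
```

In the paper's full method the prototypes are additionally made dynamic (generated from support context rather than fixed), which this fixed-prototype sketch does not attempt to reproduce.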
ISSN: 0278-0062
1558-254X
DOI: 10.1109/TMI.2024.3459943