DDA-Net: Unsupervised cross-modality medical image segmentation via dual domain adaptation



Bibliographic Details
Published in: Computer Methods and Programs in Biomedicine, Vol. 213, p. 106531
Main Authors: Bian, Xuesheng; Luo, Xiongbiao; Wang, Cheng; Liu, Weiquan; Lin, Xiuhong
Format: Journal Article
Language: English
Published: Ireland: Elsevier B.V., 01-01-2022
Description
Summary:
• We propose DDA-Net, a novel domain adaptation method for complicated unsupervised pixel-wise semantic segmentation with ten categories, which performs domain adaptation in both feature space and image space.
• Using a cross-modality auto-encoder, DDA-Net maps cross-modality medical images into a shared feature subspace and effectively alleviates the structural distortion caused by DCNs trained with insufficient data.
• Experiments demonstrate that DDA-Net with dual domain adaptation effectively improves the accuracy of unsupervised segmentation and achieves state-of-the-art performance in cross-modality head and heart image segmentation.
Background and Objective: Deep convolutional networks are powerful tools for single-modality medical image segmentation, but they generally require semantic labelling or annotation that is laborious and time-consuming. However, domain shift among modalities critically degrades the performance of deep convolutional networks trained only on single-modality labelled data.
Methods: In this paper, we propose an end-to-end unsupervised cross-modality segmentation network, DDA-Net, for accurate medical image segmentation without semantic annotation or labelling on the target domain. To close the domain gap, images with domain shift are mapped into a shared domain-invariant representation space. In addition, spatial position information, which benefits the spatial structure consistency of semantic information, is preserved by an introduced cross-modality auto-encoder.
Results: We validated the proposed DDA-Net on cross-modality medical image datasets of brain images and heart images. The experimental results show that DDA-Net effectively alleviates domain shift and suppresses model degradation.
Conclusions: The proposed DDA-Net successfully closes the domain gap between different modalities of medical images and achieves state-of-the-art performance in cross-modality medical image segmentation. It can also be generalized to other semi-supervised or unsupervised segmentation tasks in other fields.
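Illustration: the record does not include code, but the dual-adaptation idea described in the summary (a shared encoder mapping both modalities into a domain-invariant space, modality-specific decoders preserving spatial structure, and a segmentation head trained only on the labelled source modality) can be sketched as below. This is a minimal PyTorch-style sketch, not the authors' implementation; all module names, channel sizes, and losses are illustrative assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SharedEncoder(nn.Module):
    """Maps images from either modality into a shared, domain-invariant feature space."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 32),
            nn.MaxPool2d(2),
            conv_block(32, feat_ch),
        )
    def forward(self, x):
        return self.net(x)

class ModalityDecoder(nn.Module):
    """Reconstructs images of one modality from shared features (preserves spatial structure)."""
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(feat_ch, 32),
            nn.Conv2d(32, out_ch, 1),
        )
    def forward(self, z):
        return self.net(z)

class SegHead(nn.Module):
    """Predicts per-pixel class logits from the shared representation."""
    def __init__(self, feat_ch=64, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(feat_ch, 32),
            nn.Conv2d(32, n_classes, 1),
        )
    def forward(self, z):
        return self.net(z)

# Usage sketch: encode both modalities into the shared space, reconstruct each with
# its own decoder, and apply the supervised segmentation loss to the source only.
encoder = SharedEncoder()
dec_src, dec_tgt = ModalityDecoder(), ModalityDecoder()
seg = SegHead(n_classes=10)

x_src = torch.randn(2, 1, 128, 128)   # labelled source-modality batch (dummy data)
x_tgt = torch.randn(2, 1, 128, 128)   # unlabelled target-modality batch (dummy data)
z_src, z_tgt = encoder(x_src), encoder(x_tgt)
recon_loss = (nn.functional.l1_loss(dec_src(z_src), x_src)
              + nn.functional.l1_loss(dec_tgt(z_tgt), x_tgt))
seg_logits = seg(z_src)               # supervised loss would be computed on the source labels

In the paper's setup an additional adversarial or alignment objective in feature space and image space would be expected; it is omitted here for brevity.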
ISSN:0169-2607
1872-7565
DOI:10.1016/j.cmpb.2021.106531