OCUCFormer: An Over-Complete Under-Complete Transformer Network for accelerated MRI reconstruction
Published in: Image and Vision Computing, Vol. 150, p. 105228
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-10-2024
Summary: Many deep learning-based architectures have been proposed for accelerated Magnetic Resonance Imaging (MRI) reconstruction. However, popular existing encoder-decoder-based networks have a few shortcomings: (1) they focus on the anatomical structure at the expense of fine details, hindering their performance in generating faithful reconstructions; (2) their lack of long-range dependencies yields sub-optimal recovery of fine structural details. In this work, we propose an Over-Complete Under-Complete Transformer network (OCUCFormer) which focuses on better capturing fine edges and details in the image and can extract the long-range relations between these features for improved single-coil (SC) and multi-coil (MC) MRI reconstruction. Our model computes long-range relations at the highest resolutions using Restormer modules for improved acquisition and restoration of fine anatomical details. Towards learning in the absence of fully sampled ground truth for supervision, we show that our model, trained with under-sampled data in a self-supervised fashion, recovers fine structures better than other works. We have extensively evaluated our network for SC and MC MRI reconstruction on brain, cardiac, and knee anatomies for 4x and 5x acceleration factors. We report significant improvements over popular deep learning-based methods when trained in supervised and self-supervised modes. We have also performed experiments demonstrating the strengths of extracting fine details and the anatomical structure and of computing long-range relations within over-complete representations. Code for our proposed method is available at: https://github.com/alfahimmohammad/OCUCFormer-main.
•We propose an Over-Complete Under-Complete Transformer Network for accelerated MRI reconstruction.
•Enhanced extraction of (i) fine details via restricted growth of receptive fields using the Over-Complete transformer network and (ii) global anatomical structures via the Under-Complete transformer network.
•Captures distant contextual relationships within the extracted fine anatomical details at various resolution levels to remove under-sampling artifacts.
•Significant improvements over popular deep learning networks on brain, cardiac, and knee anatomies for single-coil and multi-coil MRI reconstruction when trained in supervised and self-supervised learning modes.
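The highlight about "restricted growth of receptive fields" in the over-complete branch can be illustrated with a small back-of-the-envelope sketch. This is not the authors' code; it is plain Python using the standard receptive-field recurrence for stacked convolutions, with illustrative function and parameter names. Upsampling before a convolution (over-complete path) shrinks each layer's step size in input-pixel units, so the receptive field grows slowly and the network stays focused on fine local detail; downsampling (under-complete path) does the opposite, rapidly covering global anatomical structure:

```python
def receptive_field(kernel_sizes, scales):
    """Track the effective receptive field (in input pixels) of a stack
    of convolutions, where each layer first rescales the feature map:
    scale < 1 means downsampling (under-complete branch),
    scale > 1 means upsampling (over-complete branch)."""
    rf, jump = 1.0, 1.0  # jump = distance between neighboring features, in input pixels
    for k, s in zip(kernel_sizes, scales):
        jump /= s                # upsampling shrinks the step; downsampling enlarges it
        rf += (k - 1) * jump     # standard receptive-field recurrence
    return rf

# Three 3x3 conv layers, downsampling by 2 before each (under-complete):
print(receptive_field([3, 3, 3], [0.5, 0.5, 0.5]))  # rapid growth: 29.0 input pixels
# Three 3x3 conv layers, upsampling by 2 before each (over-complete):
print(receptive_field([3, 3, 3], [2.0, 2.0, 2.0]))  # restricted growth: 2.75 input pixels
```

Under these assumed layer settings, the under-complete stack already sees a 29-pixel neighborhood after three layers, while the over-complete stack sees under 3 pixels, which is the intuition behind pairing the two branches: one for global structure, one for fine edges.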
ISSN: 0262-8856
DOI: 10.1016/j.imavis.2024.105228