Automatic Image Guidance for Assessment of Placenta Location in Ultrasound Video Sweeps

Bibliographic Details
Published in: Ultrasound in Medicine & Biology, Vol. 49, No. 1, pp. 106-121
Main Authors: Gleed, Alexander D., Chen, Qingchao, Jackman, James, Mishra, Divyanshu, Chandramohan, Varun, Self, Alice, Bhatnagar, Shinjini, Papageorghiou, Aris T., Noble, J. Alison
Format: Journal Article
Language:English
Published: Elsevier Inc., England, 01-01-2023
Description
Summary: Ultrasound-based assistive tools aim to reduce the high skill needed to interpret a scan by providing automatic image guidance. This may encourage uptake of ultrasound (US) clinical assessment in rural settings in low- and middle-income countries (LMICs), where well-trained sonographers can be scarce. This paper describes a new method that automatically generates an assistive video overlay to guide a user in assessing placenta location. The user captures US video by following a sweep protocol that scans a U-shape on the lower maternal abdomen; the sweep trajectory is simple and easy to learn. We first explore a 2-D embedding of placenta shapes, mapping manually segmented placentas in US video frames to a 2-D space (2013 frames from 11 videos). This provides insight into the spectrum of placenta shapes that appear under the sweep protocol. From three observed clusters, we propose a classification of placenta shapes: complex, tip and rectangular. We use this insight to design an effective automatic segmentation algorithm, combining a U-Net with a CRF-RNN module to improve segmentation performance with respect to placenta shape. The U-Net + CRF-RNN algorithm automatically segments the placenta and the maternal bladder. We assess segmentation performance using both area and shape metrics, and report results comparable to the state of the art for automatic placenta segmentation on the Dice metric, achieving 0.83 ± 0.15 evaluated on 2127 frames from 10 videos. We also qualitatively evaluate 78,308 frames from 135 videos, assessing whether the anatomical outline is correctly segmented. We find that adding the CRF-RNN improves over a baseline U-Net by up to 14% with respect to the percentage shape error when faced with the complex placenta shapes observed in our 2-D embedding.
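The Dice metric reported above is the standard overlap score between a predicted and a ground-truth binary mask. A minimal sketch of how such a score is computed (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks: 2 overlapping pixels, 3 positives in each mask.
pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```

In practice the per-frame scores would be averaged over the evaluation set, which is how a summary figure such as 0.83 ± 0.15 arises.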
From the segmentations, an assistive video overlay is automatically constructed that (i) highlights the placenta and bladder, (ii) determines the lower placenta edge and highlights this location as a point, and (iii) labels a 2-cm clearance on the lower placenta edge. The 2-cm clearance is chosen to satisfy current clinical guidelines. We propose to assess the placenta location by comparing the 2-cm region with the bottom of the bladder, which serves as a coarse localization of the cervix. Anatomically, the bladder must sit above the cervix region. We present proof-of-concept results for the video overlay.
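The comparison between the 2-cm clearance region and the bladder bottom can be sketched as a simple distance check. This is a hypothetical illustration only: the function name, coordinate convention (distances in millimetres along the scan depth axis) and example values are assumptions, not details from the paper, and the real overlay works on segmented image regions rather than two scalar positions.

```python
def within_clearance(placenta_lower_edge_mm: float,
                     bladder_bottom_mm: float,
                     clearance_mm: float = 20.0) -> bool:
    """Return True if the bladder bottom (a coarse proxy for the cervix)
    falls within the 2-cm clearance of the lower placenta edge.
    All positions are hypothetical scalar depths in millimetres."""
    return abs(placenta_lower_edge_mm - bladder_bottom_mm) <= clearance_mm

# Illustrative values: edge at 85 mm depth, bladder bottom at 100 mm.
print(within_clearance(85.0, 100.0))  # 15 mm apart -> True (inside 2 cm)
print(within_clearance(85.0, 110.0))  # 25 mm apart -> False
```

A True result would flag the case for expert review, since the clearance region reaching the bladder bottom suggests a potentially low-lying placenta.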
ISSN: 0301-5629, 1879-291X
DOI: 10.1016/j.ultrasmedbio.2022.08.006