Comparative study of deep learning models for optical coherence tomography angiography


Bibliographic Details
Published in: Biomedical Optics Express, Vol. 11, No. 3, pp. 1580-1597
Main Authors: Jiang, Zhe, Huang, Zhiyu, Qiu, Bin, Meng, Xiangxi, You, Yunfei, Liu, Xi, Liu, Gangjun, Zhou, Chuangqing, Yang, Kun, Maier, Andreas, Ren, Qiushi, Lu, Yanye
Format: Journal Article
Language: English
Published: Optical Society of America, United States, 01-03-2020
Description
Summary: Optical coherence tomography angiography (OCTA) is a promising imaging modality for microvasculature studies. Meanwhile, deep learning has developed rapidly in image-to-image translation tasks. Some studies have proposed applying deep learning models to OCTA reconstruction and have obtained preliminary results. However, current studies are mostly limited to a few specific deep neural networks. In this paper, we conducted a comparative study of OCTA reconstruction using deep learning models. Four representative network architectures, including single-path models, U-shaped models, generative adversarial network (GAN)-based models, and multi-path models, were evaluated on a dataset of OCTA images acquired from rat brains. Three potential solutions were also investigated to assess the feasibility of further improving performance. The results show that U-shaped models and multi-path models are two suitable architectures for OCTA reconstruction. Furthermore, merging phase information is a promising direction for future research.
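
The summary names four architecture families compared for OCTA reconstruction. As a concrete illustration of the U-shaped family it identifies as suitable, below is a minimal sketch in PyTorch. The network name (TinyUNet), its channel widths, and its depth are illustrative assumptions for this record, not the models actually benchmarked in the paper.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic unit of a U-shaped model.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    # Hypothetical minimal U-shaped encoder-decoder with skip connections
    # for image-to-image OCTA reconstruction; sizes are illustrative only.
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                 # encoder level 1 (skip connection)
        e2 = self.enc2(self.pool(e1))     # encoder level 2 (skip connection)
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # decode + skip
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # decode + skip
        return self.head(d1)              # reconstructed OCTA image

# Usage: map a single-channel OCT-derived input to an OCTA image.
model = TinyUNet()
y = model(torch.randn(1, 1, 128, 128))  # -> torch.Size([1, 1, 128, 128])

The skip connections are the defining design choice of the U-shaped family: they carry fine spatial detail from the encoder directly to the decoder, which is why such models tend to preserve the thin vessel structures that matter in OCTA reconstruction.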
ISSN: 2156-7085
DOI: 10.1364/BOE.387807