Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network

Bibliographic Details
Published in: PLoS ONE, Vol. 12, No. 3, p. e0168606
Main Authors: Liu, Xiyang, Jiang, Jiewei, Zhang, Kai, Long, Erping, Cui, Jiangtao, Zhu, Mingmin, An, Yingying, Zhang, Jia, Liu, Zhenzhen, Lin, Zhuoling, Li, Xiaoyan, Chen, Jingjing, Cao, Qianzhong, Li, Jing, Wu, Xiaohang, Wang, Dongni, Lin, Haotian
Format: Journal Article
Language: English
Published: Public Library of Science (PLoS), United States, 17-03-2017
Description
Summary: Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of pediatric cataracts in slit-lamp images that identifies the lens region of interest (ROI) and employs a deep convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is then located automatically in the original image using two successive applications of Canny edge detection and the Hough transform; the resulting ROIs are cropped, resized to a fixed size, and used to form pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and to perform automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted by the CNN, we combine them with support vector machine (SVM) and softmax classifiers and compare the results with traditional representative methods. The qualitative and quantitative experimental results demonstrate that the proposed method offers exceptional mean accuracy, sensitivity, and specificity for classification (97.07%, 97.28%, and 96.83%) and for three-degree grading of area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%), and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed automatic diagnostic software for ophthalmologists and patients to apply the validated model in clinical practice.
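To illustrate the localization step described in the abstract, the following is a minimal sketch assuming OpenCV (cv2) and NumPy. The helper name locate_lens_roi, the 224×224 output size, and all threshold and radius parameters are illustrative assumptions, not the authors' implementation; note that OpenCV's HoughCircles applies Canny edge detection internally (param1 is the upper Canny threshold), which corresponds to the Canny-plus-Hough pipeline the abstract describes.

```python
# Minimal sketch of the lens-ROI localization step: Canny edge detection plus a
# Hough circle transform to find the lens boundary, followed by cropping and
# resizing to a fixed input size. All parameter values are assumptions.
import cv2
import numpy as np

def locate_lens_roi(image_path, out_size=(224, 224)):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before edge detection

    # HoughCircles runs Canny edge detection internally; param1 is the upper
    # Canny threshold, param2 the accumulator threshold for circle centres.
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=gray.shape[0],
        param1=150, param2=40,
        minRadius=gray.shape[0] // 8, maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None  # no circular lens boundary detected

    x, y, r = np.round(circles[0, 0]).astype(int)
    roi = img[max(y - r, 0):y + r, max(x - r, 0):x + r]  # crop the lens region
    return cv2.resize(roi, out_size)  # fixed-size ROI for the dataset / CNN
```

The cropped, fixed-size ROIs would then be fed to the CNN; as the abstract notes, the resulting high-level features can be classified either by the network's own softmax layer or by an external SVM classifier.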
Competing Interests: The authors have declared that no competing interests exist.
Author Contributions: Conceptualization: HTL, XY. Liu. Data curation: EPL, ZLL. Formal analysis: QZC, KZ. Funding acquisition: HTL, XY. Liu, MMZ. Investigation: ZZL, XHW. Methodology: JWJ, XY. Liu, MMZ. Project administration: HTL, XY. Liu. Resources: JL, DNW. Software: JWJ, KZ. Supervision: MMZ, JTC. Validation: JZ, YYA. Visualization: YYA, XY. Li, JJC. Writing – original draft: JWJ, EPL, XY. Liu, KZ. Writing – review & editing: JWJ, HTL, XY. Li.
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0168606