Optimized IANSegNet: Deep Segmentation for the Detection of Inferior Alveolar Nerve Canal

Bibliographic Details
Published in: BioMed Research International, Vol. 2023, no. 1
Main Authors: Krishnan, V. Gokula, Navaneethakrishnan, M., Ganesan, Sangeetha, Saradhi, M. V. Vijaya, Hemapriya, K., Selvaraj, D., Deepa, J., Murthy, K. Sreerama, Doss, Srinath
Format: Journal Article
Language:English
Published: New York: Hindawi Limited, 2023
Description
Summary: Imaging studies in dentistry and maxillofacial pathology have recently focused on detecting the inferior alveolar nerve (IAN) canal. Despite the small size of available 3D maxillofacial datasets, deep learning-based algorithms have shown encouraging results in this area. This study describes a large, freely available mandibular cone-beam CT (CBCT) dataset with 2D and 3D manual annotations. The dataset was used to train a residual neural network, IANSegNet, designed for low GPU memory consumption and computational cost. As its encoder, IANSegNet uses the computationally efficient 3D ShuffleNetV2 network to reduce graphics processing unit (GPU) memory usage and improve efficiency; a decoder with residual blocks is then added to preserve segmentation quality. To address network convergence and class imbalance, a loss combining Dice loss and cross-entropy loss is employed. Optimized postprocessing techniques are also proposed to refine the coarse segmentation results produced by IANSegNet. Validation results show that IANSegNet outperformed other deep learning models across a variety of criteria.
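
As a minimal sketch of how a combined Dice and cross-entropy loss of the kind described in the summary could be implemented, the following PyTorch-style function covers binary 3D segmentation; the function name, loss weighting, and smoothing constant are illustrative assumptions and are not taken from the paper:

# Sketch (not the authors' code): combined soft Dice + binary cross-entropy
# loss for a binary 3D segmentation task such as IAN canal detection.
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, smooth=1.0, ce_weight=0.5):
    # logits: (N, 1, D, H, W) raw network outputs; target: same shape, values in {0, 1}
    probs = torch.sigmoid(logits)
    # Soft Dice term: measures volumetric overlap and is less sensitive to class imbalance
    intersection = (probs * target).sum()
    dice = (2.0 * intersection + smooth) / (probs.sum() + target.sum() + smooth)
    dice_loss = 1.0 - dice
    # Voxel-wise binary cross-entropy term: provides dense gradients that help convergence
    ce_loss = F.binary_cross_entropy_with_logits(logits, target.float())
    # ce_weight balances the two terms; 0.5 is an arbitrary illustrative choice
    return ce_weight * ce_loss + (1.0 - ce_weight) * dice_loss

In practice the relative weight of the two terms would be tuned on validation data; the Dice term counters the severe foreground/background imbalance of a thin canal structure, while the cross-entropy term stabilizes early training.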
ISSN: 2314-6133
2314-6141
DOI: 10.1155/2023/6431692