Kidney and Kidney Tumour Segmentation from 3D CT Scan using DeepLabv3

Bibliographic Details
Published in: Proceedings (IEEE Region 10 Symposium. Online), pp. 1-6
Main Authors: Jariwala, Taashna A., Mehta, Pranavi C., Mehta, Mayuri A., Joshi, Vivek C.
Format: Conference Proceeding
Language: English
Published: IEEE 27-09-2024
Description
Summary: Segmenting the kidney and kidney tumour from CT scans is crucial in addressing challenges in the early detection of kidney cancer. Several segmentation methods are available to segment the kidney and kidney tumour from 3D CT scans. However, these methods pose several drawbacks, including dependency on pixel-wise classification, limited generalisation, and manual annotation requirements. Hence, this paper introduces a novel kidney and kidney tumour segmentation approach employing an encoder-decoder-based architecture. The proposed segmentation approach is assessed against two encoder-decoder-based architectures, namely U-Net and DeepLabv3+. The proposed approach precisely identifies the kidney and kidney tumour in a 3D CT scan. Its performance is analysed on the 2023 Kidney and Kidney Tumour Segmentation Challenge (KiTS23) dataset. Evaluation metrics such as the Dice coefficient and Intersection over Union (IoU) are used to assess performance. Our results on the KiTS23 dataset show that DeepLabv3+ outperforms U-Net; the paper therefore discusses the DeepLabv3+ approach in detail. Over U-Net, DeepLabv3+ achieves an average improvement of 0.82% in Dice coefficient, 1.60% in IoU, and 39.28% in loss during training, and 0.94% in Dice coefficient, 1.82% in IoU, and 44.88% in loss during validation.
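
The abstract evaluates segmentation quality with the Dice coefficient and IoU. The sketch below is not the authors' code; it is a minimal illustration of how these two metrics are typically computed for binary masks, with toy array shapes and a random "volume" as assumptions purely for demonstration.

```python
# Minimal sketch of the Dice coefficient and IoU metrics named in the abstract.
# Not the authors' implementation; shapes and thresholds are illustrative assumptions.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|P ∩ T| / (|P| + |T|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |P ∩ T| / |P ∪ T| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)


if __name__ == "__main__":
    # Toy 3D volumes standing in for a predicted and a ground-truth kidney mask.
    rng = np.random.default_rng(0)
    gt = rng.random((4, 64, 64)) > 0.5
    pred = rng.random((4, 64, 64)) > 0.5
    print(f"Dice: {dice_coefficient(pred, gt):.4f}, IoU: {iou(pred, gt):.4f}")
```

In practice, such metrics are computed per class (kidney and tumour) and averaged over the validation cases, which matches how the abstract reports average improvements of DeepLabv3+ over U-Net.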
ISSN: 2642-6102
DOI: 10.1109/TENSYMP61132.2024.10752238