Computer Vision and Machine-Learning Techniques for Automatic 3D Virtual Images Overlapping During Augmented Reality Guided Robotic Partial Nephrectomy

Bibliographic Details
Published in: Technology in Cancer Research & Treatment, Vol. 23, p. 15330338241229368
Main Authors: Amparore, Daniele, Sica, Michele, Verri, Paolo, Piramide, Federico, Checcucci, Enrico, De Cillis, Sabrina, Piana, Alberto, Campobasso, Davide, Burgio, Mariano, Cisero, Edoardo, Busacca, Giovanni, Di Dio, Michele, Piazzolla, Pietro, Fiori, Cristian, Porpiglia, Francesco
Format: Journal Article
Language: English
Published: United States: SAGE Publications, 01-01-2024
Summary: The purpose of this research is to develop software that automatically integrates and overlays 3D virtual models of kidneys harboring renal masses onto the Da Vinci robotic console, assisting the surgeon during the intervention. Precision medicine, especially in the field of minimally invasive partial nephrectomy, aims to use 3D virtual models as guidance for augmented reality robotic procedures. However, the co-registration of the virtual images over the real operative field is currently performed manually. In this prospective study, two strategies for automatically overlapping the model onto the real kidney were explored: computer vision technology, which leverages the super-enhancement of the kidney produced by intraoperative indocyanine green injection for the superimposition, and convolutional neural network technology, which processes live images from the endoscope after the software has been trained on frames from prerecorded videos of the same surgery. The team, comprising a bioengineer, a software developer, and a surgeon, collaborated to create hyper-accuracy 3D models for automatic 3D-AR-guided RAPN. For each patient, demographic and clinical data were collected. Two groups were defined: group A (first technology, 12 patients) and group B (second technology, 8 patients); they showed comparable preoperative and postoperative characteristics. With the first technology the average co-registration time was 7 (3-11) seconds, versus 11 (6-13) seconds with the second. No major intraoperative or postoperative complications were recorded. There were no differences in functional outcomes between the groups at any time point considered. The first technology successfully anchored the 3D model to the kidney, requiring only minimal manual refinement. The second technology improved automatic kidney detection without relying on indocyanine injection, yielding better identification of organ boundaries during testing. Further studies are needed to confirm this preliminary evidence.
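The authors' implementation is not published with this record. As an illustration only, the following is a minimal sketch, in Python with OpenCV, of how the first (computer vision) strategy could isolate the ICG-enhanced kidney by color thresholding and derive a 2D anchor point for overlaying the virtual model. The threshold values, the function name kidney_mask_from_icg_frame, and the example file path are assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' software): isolate the ICG-enhanced kidney in an
# endoscopic frame by color thresholding, so the resulting mask/centroid could drive
# automatic anchoring of a 3D virtual model. Threshold values are illustrative guesses.
import cv2
import numpy as np


def kidney_mask_from_icg_frame(frame_bgr: np.ndarray):
    """Return a binary mask of the fluorescent (green) region and its centroid, if any."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Assumed HSV range for the green ICG fluorescence enhancement; would need tuning per system.
    lower_green = np.array([40, 60, 60])
    upper_green = np.array([90, 255, 255])
    mask = cv2.inRange(hsv, lower_green, upper_green)

    # Clean up speckle and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the largest connected component, assumed to be the kidney.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    kidney = max(contours, key=cv2.contourArea)
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, [kidney], -1, 255, thickness=cv2.FILLED)

    # Centroid of the kidney region, usable as a 2D anchor point for the virtual model.
    m = cv2.moments(kidney)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])) if m["m00"] else None
    return clean, centroid


# Example usage on a single prerecorded frame (path is hypothetical):
# frame = cv2.imread("icg_frame.png")
# mask, anchor = kidney_mask_from_icg_frame(frame)
```

Under the second strategy described in the abstract, the color-thresholding step would instead be performed by a convolutional segmentation network trained on frames from prerecorded videos of the same surgery, removing the dependency on indocyanine green injection.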
These Authors contributed equally to the Senior Authorship.
These Authors contributed equally to this work.
ISSN: 1533-0346
1533-0338
DOI: 10.1177/15330338241229368