Explainable AI for Medical Image Analysis in Medical Cyber-Physical Systems: Enhancing Transparency and Trustworthiness of IoMT
Published in: IEEE Journal of Biomedical and Health Informatics, Vol. PP, pp. 1-12
Main Authors:
Format: Journal Article
Language: English
Published: United States: IEEE, 27-11-2023
Summary: Medical image analysis plays a crucial role in Internet of Medical Things (IoMT) healthcare systems, aiding in the diagnosis, treatment planning, and monitoring of various diseases. With the increasing adoption of artificial intelligence (AI) techniques in medical image analysis, there is a growing need for transparency and trustworthiness in decision-making. This study explores the application of explainable AI (XAI) to medical image analysis within medical cyber-physical systems (MCPS) to enhance transparency and trustworthiness. To this end, it proposes an explainable framework that integrates machine learning and knowledge reasoning: a result is treated as explainable when the framework's learned target-feature results and the knowledge-reasoning results agree and are sufficiently reliable. However, these technologies also present new challenges, including the need to secure the privacy of patient data collected through the IoMT, so attack detection is an essential aspect of MCPS security. For an MCPS model subject only to sensor attacks, necessary and sufficient conditions for detecting attacks are derived from the definition of sparse observability, and the corresponding attack detector and state estimator are designed under the assumption that some IoMT sensors are protected. The protected IoMT sensors are shown to play an important role in improving the efficiency of attack detection and state estimation. Experimental results show that applying XAI to medical image analysis within MCPS improves the accuracy of lesion classification, effectively removes low-quality medical images, and makes the recognition results explainable. This helps doctors understand the logic of the system's decision-making and lets them decide whether to trust a result based on the explanation the framework provides.
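
The agreement criterion described in the summary lends itself to a compact illustration. The Python sketch below shows only the basic idea: a recognition result is reported as explainable when the learned classifier and the knowledge-reasoning component produce the same label with sufficient reliability, and is flagged for clinician review otherwise. The function names, the `Decision` structure, and the thresholds are hypothetical; this is not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of the agreement check described in the
# abstract: a result is explainable only when the machine-learning prediction
# and the knowledge-reasoning result name the same class and both are reliable
# enough. Thresholds and field names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Decision:
    label: str         # lesion class shown to the clinician
    explainable: bool  # True only when learning and reasoning agree
    rationale: str     # human-readable justification of the outcome


def check_agreement(ml_label: str, ml_conf: float,
                    kr_label: str, kr_reliability: float,
                    min_conf: float = 0.8, min_rel: float = 0.8) -> Decision:
    """Combine a classifier output with a knowledge-reasoning output."""
    if ml_label == kr_label and ml_conf >= min_conf and kr_reliability >= min_rel:
        return Decision(ml_label, True,
                        f"classifier ({ml_conf:.2f}) and rule-based reasoning "
                        f"({kr_reliability:.2f}) both support '{ml_label}'")
    return Decision(ml_label, False,
                    f"classifier suggests '{ml_label}' ({ml_conf:.2f}) but "
                    f"reasoning yields '{kr_label}' ({kr_reliability:.2f}); "
                    "result flagged for clinician review")


if __name__ == "__main__":
    # Toy usage: an agreement case and a disagreement case.
    print(check_agreement("malignant", 0.93, "malignant", 0.88))
    print(check_agreement("malignant", 0.91, "benign", 0.85))
```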
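The sensor-attack detection can be sketched in a similar spirit. The snippet below illustrates, under illustrative assumptions (a static linear measurement model, a least-squares estimator, and an arbitrary residual threshold), how an estimate built from protected sensors can be used to flag suspicious unprotected sensors; it does not reproduce the paper's sparse-observability conditions or its detector design.

```python
# A minimal numpy sketch of the residual-style check suggested by the abstract:
# a state estimate is formed from IoMT sensors assumed to be under protection
# (hence attack-free), and any remaining sensor whose measurement deviates too
# far from the value predicted by that estimate is flagged. The model, the
# estimator, and the threshold are assumptions, not the paper's detector.

import numpy as np


def estimate_from_protected(C: np.ndarray, y: np.ndarray,
                            protected: list[int]) -> np.ndarray:
    """Least-squares state estimate using only protected (trusted) sensor rows."""
    Cp, yp = C[protected, :], y[protected]
    x_hat, *_ = np.linalg.lstsq(Cp, yp, rcond=None)
    return x_hat


def detect_sensor_attacks(C: np.ndarray, y: np.ndarray,
                          protected: list[int],
                          threshold: float = 0.1) -> list[int]:
    """Return indices of unprotected sensors with a large residual."""
    x_hat = estimate_from_protected(C, y, protected)
    residual = np.abs(y - C @ x_hat)
    return [i for i, r in enumerate(residual)
            if i not in protected and r > threshold]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = rng.standard_normal((6, 3))   # 6 sensors observing a 3-dim state
    x_true = rng.standard_normal(3)
    y = C @ x_true
    y[4] += 2.0                       # inject an attack on sensor 4
    print(detect_sensor_attacks(C, y, protected=[0, 1, 2]))  # -> [4]
```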
ISSN: 2168-2194, 2168-2208
DOI: 10.1109/JBHI.2023.3336721