Alzheimer's Patient Analysis Using Image and Gene Expression Data and Explainable-AI to Present Associated Genes

Bibliographic Details
Published in:IEEE Transactions on Instrumentation and Measurement, Vol. 70, pp. 1-7
Main Authors: Kamal, Md. Sarwar, Northcote, Aden, Chowdhury, Linkon, Dey, Nilanjan, Crespo, Ruben Gonzalez, Herrera-Viedma, Enrique
Format: Journal Article
Language:English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
Summary:There are more than 10 million new cases of Alzheimer's disease worldwide each year, which means there is a new case every 3.2 s. Alzheimer's disease (AD) is a progressive neurodegenerative disease, and various machine learning (ML) and image processing methods have been used to detect it. In this study, we used ML methods to classify AD using both image and gene expression data. First, SpinalNet and a convolutional neural network (CNN) were used to classify AD from MRI images. Then, microarray gene expression data were used to classify the disease with k-nearest neighbors (KNN), support vector classifier (SVC), and XGBoost classifiers. Previous approaches used either images or gene expression data alone, whereas we used both together; moreover, it was difficult to understand how earlier classifiers predicted the diseases and genes, and explaining those results in a trustworthy way would be useful. To establish trustworthy predictive modeling, we introduced an explainable artificial intelligence (XAI) method: local interpretable model-agnostic explanations (LIME), which provides a simple human interpretation. LIME interprets how genes were predicted and which genes are particularly responsible for an AD patient. The accuracy of the CNN is 97.6%, which is 10.96% higher than that of the SpinalNet approach. When analyzing gene expression data, SVC provides higher accuracy than the other approaches. LIME shows how genes were selected for a particular AD patient, and the most important genes for that patient were determined from the gene expression data.
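The core idea in the abstract — fitting a classifier to gene expression features and then using a LIME-style local surrogate to rank the genes most responsible for one patient's prediction — can be sketched as follows. This is a minimal illustration with synthetic data and a from-scratch KNN, not the authors' pipeline; all names, the perturbation scale, and the kernel width are assumptions for the sketch.

```python
# Hypothetical LIME-style local explanation (NOT the paper's implementation):
# perturb one patient's feature vector, weight perturbations by proximity,
# and fit a weighted ridge-regularized linear surrogate to the classifier's
# outputs. The largest-magnitude surrogate weights indicate the locally
# most influential "genes".
import numpy as np

rng = np.random.default_rng(0)

# Synthetic microarray-like data: 200 patients x 10 genes; the label
# depends mainly on genes 0 and 3, so a good explanation should surface them.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 2.0 * X[:, 3] > 0).astype(int)

def knn_predict_proba(X_train, y_train, queries, k=5):
    """Plain k-nearest-neighbors estimate of P(class 1) for each query row."""
    probs = []
    for q in queries:
        d = np.linalg.norm(X_train - q, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        probs.append(nearest.mean())
    return np.array(probs)

def lime_explain(x, predict_proba, n_samples=500, width=0.75):
    """LIME-style surrogate: sample around x, weight by an RBF proximity
    kernel, and solve a weighted least-squares problem for local gene weights."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))   # perturbations
    p = predict_proba(Z)                                      # black-box output
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / width ** 2)
    A = np.hstack([Z, np.ones((n_samples, 1))])               # intercept column
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A + 1e-3 * np.eye(A.shape[1]),
                           A.T @ W @ p)                       # ridge for stability
    return coef[:-1]                                          # drop intercept

patient = X[0]
weights = lime_explain(patient, lambda Z: knn_predict_proba(X, y, Z))
top_genes = np.argsort(-np.abs(weights))[:2]
print("locally most influential genes for this patient:", sorted(top_genes))
```

The same pattern applies unchanged if the KNN stand-in is swapped for an SVC or XGBoost model, since the surrogate only queries the classifier through `predict_proba`.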
ISSN:0018-9456 (print); 1557-9662 (electronic)
DOI:10.1109/TIM.2021.3107056