A Federated Learning Model Based on Hardware Acceleration for the Early Detection of Alzheimer’s Disease

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 23, no. 19, p. 8272
Main Authors: Khalil, Kasem; Khan Mamun, Mohammad Mahbubur Rahman; Sherif, Ahmed; Elsersy, Mohamed Said; Imam, Ahmad Abdel-Aliem; Mahmoud, Mohamed; Alsabaan, Maazen
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01-10-2023
Description
Summary: Alzheimer’s disease (AD) is a progressive illness with a slow onset that develops over many years; its consequences are devastating to patients and their families. If detected early, the disease’s impact and prognosis can be altered significantly. Blood biosamples are often employed in simple medical testing since they are cost-effective and easy to collect and analyze. This research provides a diagnostic model for Alzheimer’s disease based on federated learning (FL) and hardware acceleration using blood biosamples. We used blood biosample datasets provided by the ADNI website to compare and evaluate the performance of our models. FL is used to train a shared model without sharing the local devices’ raw data with a central server, thereby preserving privacy. We developed a hardware-acceleration approach for building our FL model in order to speed up the training and testing procedures. The VHDL hardware description language and an Altera 10 GX FPGA are used to construct the hardware accelerator. The simulation results reveal that the proposed methods achieve an early-detection accuracy of 89% and a sensitivity of 87% while requiring less training time than other state-of-the-art algorithms. The proposed algorithms have a power consumption of 35–39 mW, which qualifies them for use on resource-limited devices. Furthermore, the results show that the proposed method has a lower inference latency (61 ms) than existing methods while using fewer resources.
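The privacy property described above — clients train on local data and only model parameters reach the server — is the core of federated averaging. The sketch below is a minimal, generic FedAvg round in Python; it is illustrative only and does not reproduce the paper’s actual model, the ADNI data handling, or the FPGA acceleration. The logistic-regression model, client data, and hyperparameters are all hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few logistic-regression gradient
    steps on its private data, which never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid activation
        grad = X.T @ (preds - y) / len(y)          # cross-entropy gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: size-weighted average of client models.
    Only parameters are aggregated; no raw samples are shared."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy simulation: 3 clients with synthetic private datasets.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]

for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```

In a deployment matching the abstract, `local_update` would run on each edge device (potentially offloaded to the FPGA accelerator), and only `fed_avg` would execute on the central server.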
ISSN: 1424-8220
DOI: 10.3390/s23198272