Toward explainable AI-empowered cognitive health assessment


Bibliographic Details
Published in: Frontiers in Public Health, Vol. 11, p. 1024195
Main Authors: Javed, Abdul Rehman, Khan, Habib Ullah, Alomari, Mohammad Kamel Bader, Sarwar, Muhammad Usman, Asim, Muhammad, Almadhor, Ahmad S, Khan, Muhammad Zahid
Format: Journal Article
Language: English
Published: Frontiers Media S.A., Switzerland, 09-03-2023
Description
Summary: Explainable artificial intelligence (XAI) is of paramount importance to various domains, including healthcare, fitness, skill assessment, and personal assistants, where understanding and explaining the decision-making process of the artificial intelligence (AI) model is essential. Smart homes embedded with smart devices and sensors have enabled many context-aware applications that recognize physical activities. This study presents a novel XAI-empowered human activity recognition (HAR) approach based on key features identified from data collected by sensors located at different places in a smart home. The approach identifies a set of new features (i.e., the total number of sensors used in a specific activity) based on weighting criteria. Next, it uses statistics (i.e., mean, standard deviation) to handle outliers and high class variance. The proposed approach is evaluated using machine learning models, namely random forest (RF), K-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), and naive Bayes (NB), and deep learning models such as a deep neural network (DNN), convolutional neural network (CNN), and CNN-based long short-term memory (CNN-LSTM). Experiments demonstrate the superior performance of the RF classifier over all other machine learning and deep learning models. For explainability, the approach uses Local Interpretable Model-Agnostic Explanations (LIME) with the RF classifier. It achieves an F-score of 0.96 for healthy-vs.-dementia classification, and 0.95 and 0.97 for activity recognition of dementia and healthy individuals, respectively.
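The pipeline described in the summary can be sketched as follows. This is a minimal illustration, not the authors' code: the features (sensor count, mean, standard deviation per activity window), the synthetic labels, and all variable names are assumptions, and scikit-learn feature importances stand in for the LIME explanations the paper actually uses (real LIME output would come from `lime.lime_tabular.LimeTabularExplainer` applied to the trained model).

```python
# Hypothetical sketch of the abstract's evaluation pipeline:
# sensor-derived features -> random forest -> F-score, with
# feature importances as a stand-in for LIME explanations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-ins for the paper's features: number of sensors
# triggered, and mean / standard deviation of readings per window.
sensor_count = rng.integers(1, 10, size=n)
reading_mean = rng.normal(0.5, 0.2, size=n)
reading_std = rng.normal(0.1, 0.03, size=n)
X = np.column_stack([sensor_count, reading_mean, reading_std])
# Synthetic binary label (e.g., healthy vs. dementia), loosely
# tied to the features so the classifier has signal to learn.
y = (sensor_count + 5 * reading_mean > 7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
score = f1_score(y_te, clf.predict(X_te))
print(f"F-score: {score:.2f}")
# Per-feature importances hint at which inputs drive predictions.
print(dict(zip(["sensor_count", "mean", "std"],
               clf.feature_importances_.round(2))))
```

On this toy data the random forest scores well because the label is a simple function of the features; the reported 0.96 F-score refers to the paper's real smart-home dataset, not this sketch.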
Edited by: Karl Schweizer, Goethe University Frankfurt, Germany
This article was submitted to Digital Public Health, a section of the journal Frontiers in Public Health
Reviewed by: Irina Mocanu, Polytechnic University of Bucharest, Romania; Sabina Baraković, University of Sarajevo, Bosnia and Herzegovina
ISSN: 2296-2565
DOI: 10.3389/fpubh.2023.1024195