Securing Android IoT devices with GuardDroid: Transparent and lightweight malware detection
Published in: Ain Shams Engineering Journal, Vol. 15, no. 5, p. 102642
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier, 01-05-2024
Summary: The Internet of Things (IoT) has experienced significant growth in recent years and has emerged as a very dynamic sector in the worldwide market. Being an open-source platform with a substantial user base, Android has not only been a driving force in the swift advancement of the IoT but has also garnered attention from malicious actors, leading to malware attacks. Given the rapid proliferation of Android malware in recent times, there is an urgent need for practical techniques to detect such malware. While current machine learning-based Android malware detection approaches have shown promising results, most of these methods demand extensive time and effort from malware analysts to construct dynamic or static features, which makes their practical application challenging. Therefore, this paper presents an Android malware detection system characterized by its lightweight design and reliance on explainable machine-learning techniques. The system uses features extracted from mobile applications (apps) to distinguish between malicious and benign apps. Through extensive testing, it has exhibited exceptional accuracy and an F1-score surpassing 0.99 while utilizing minimal device resources and presenting negligible false positive and false negative rates. Furthermore, the classifier model's transparency and comprehensibility are significantly augmented through the application of Shapley additive explanation (SHAP) scores, enhancing the overall interpretability of the system.
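The SHAP scoring the summary refers to can be illustrated with a minimal, self-contained sketch. This is not the paper's classifier: the permission names and weights below are hypothetical placeholders, and a simple linear scorer stands in for the trained model. It computes exact Shapley values by enumerating feature subsets, which is feasible only for a handful of features; for a linear model, each feature's Shapley value reduces to its weight.

```python
from itertools import combinations
from math import factorial

# Toy linear malware scorer over binary app features (e.g., requested
# permissions). Weights and bias are illustrative, not the paper's model.
WEIGHTS = {
    "SEND_SMS": 2.0,
    "READ_CONTACTS": 1.0,
    "INTERNET": 0.2,
}
BIAS = -1.5


def score(features):
    """Malware score for an app, given the set of features it exhibits."""
    return BIAS + sum(WEIGHTS[f] for f in features)


def shapley_values(present):
    """Exact Shapley value of each present feature for the model score.

    Absent features contribute a baseline of 0 (binary indicators), so the
    coalition value of a subset S is simply score(S). Each feature's value is
    its weighted average marginal contribution over all subsets of the others.
    """
    n = len(present)
    phi = {}
    for f in present:
        others = [g for g in present if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(set(subset) | {f}) - score(set(subset)))
        phi[f] = total
    return phi


# Explain one hypothetical app that requests SEND_SMS and INTERNET.
app_features = ["SEND_SMS", "INTERNET"]
phi = shapley_values(app_features)
# Because the scorer is additive, phi["SEND_SMS"] equals its weight (2.0)
# and phi["INTERNET"] equals 0.2.
```

In practice, exact enumeration is replaced by approximations (e.g., the SHAP library's tree or kernel explainers) when the feature set is large, but the attribution idea is the same: rank the features that pushed a given app toward the malicious or benign decision.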
ISSN: 2090-4479
DOI: 10.1016/j.asej.2024.102642