Advanced centralized and distributed SVM models over different IoT levels for edge layer intelligence and control


Bibliographic Details
Published in: Evolutionary Intelligence, Vol. 15, No. 1, pp. 481–495
Main Authors: Pattnaik, Bhawani Shankar, Pattanayak, Arunima Sambhuta, Udgata, Siba Kumar, Panda, Ajit Kumar
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01-03-2022
Summary: In this era, the Internet of Things (IoT) deals with billions of edge devices potentially connected to each other. Most applications built on these edge devices generate massive amounts of online data and also require real-time computation and decision making with low latency (e.g., robotics/drones, self-driving cars, smart IoT, electronics/wearable devices). To meet this requirement, future-generation intelligent edge devices need to be capable of computing complex machine learning algorithms on live data in real time. Considering the different layers of IoT and the concept of distributed computing, this paper suggests three operational models in which the ML algorithm is executed in a distributed manner between the edge and cloud layers of IoT so that the edge node can take decisions in real time. The three models are: model 1, where both training and prediction are done locally at the edge; model 2, where training is done at the server and decision making at the edge node; and model 3, with distributed training and distributed decision making at the edge level using globally shared knowledge and security. All three models have been tested with a support vector machine on thirteen diverse datasets to profile their performance in terms of both training and inference time. A comparative study of the computational performance of the edge and cloud nodes is also presented. Through simulated experiments on the different datasets, it is observed that edge-node inference is approximately ten times faster than cloud inference for all tested datasets in each proposed model. At the same time, the model 2 training time is approximately nine times faster than that of model 1 and eleven times faster than that of model 3.
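The split described for model 2 (training at the server/cloud, prediction at the edge node) can be illustrated with a minimal sketch. The sketch below assumes scikit-learn's SVC, the Iris dataset as a stand-in for one of the thirteen datasets, and pickle serialization as the mechanism for shipping the trained model to the edge; none of these choices are specified by the paper itself.

# Minimal sketch of "model 2": train the SVM at the server, predict at the edge.
# Library, dataset, and serialization choices are assumptions, not the authors' setup.
import pickle
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# --- Server/cloud side: train the SVM on the full training split ---
X, y = load_iris(return_X_y=True)  # stand-in for one of the paper's 13 datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

server_model = SVC(kernel="rbf", C=1.0)
server_model.fit(X_train, y_train)

# Serialize the trained model; in a real deployment this blob would be pushed
# to the edge device over the network (e.g., MQTT or HTTP).
model_blob = pickle.dumps(server_model)

# --- Edge side: load the received model and run low-latency local inference ---
edge_model = pickle.loads(model_blob)
predictions = edge_model.predict(X_test)  # decision making happens at the edge
print("edge-side accuracy:", (predictions == y_test).mean())

Under this split, the edge node never pays the training cost (which stays at the server), but retains local, low-latency inference, which matches the training/inference trade-off the abstract reports for model 2.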
ISSN: 1864-5909, 1864-5917
DOI: 10.1007/s12065-020-00524-3