Node Selection Toward Faster Convergence for Federated Learning on Non-IID Data


Bibliographic Details
Published in: IEEE Transactions on Network Science and Engineering, Vol. 9, No. 5, pp. 3099-3111
Main Authors: Wu, Hongda; Wang, Ping
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-09-2022
Description
Summary: Federated Learning (FL) is a distributed learning paradigm that enables a large number of resource-limited nodes to collaboratively train a model without data sharing. Non-independent-and-identically-distributed (non-i.i.d.) data samples introduce discrepancies between the global and local objectives, making the FL model slow to converge. In this paper, we propose the Optimal Aggregation algorithm, which finds the optimal subset of local updates from participating nodes in each global round by identifying and excluding adverse local updates through checking the relationship between the local gradient and the global gradient. We then propose a Probabilistic Node Selection framework (FedPNS) that dynamically adjusts the probability of each node being selected based on the output of Optimal Aggregation. FedPNS preferentially selects nodes that propel faster model convergence. The convergence rate improvement of FedPNS over the commonly adopted Federated Averaging (FedAvg) algorithm is analyzed theoretically. Experimental results demonstrate the effectiveness of FedPNS in accelerating the FL convergence rate compared to FedAvg with random node selection.
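The two ideas in the abstract can be sketched in a few lines: exclude local updates whose gradients point against the aggregated global gradient, then boost the selection probability of the nodes that were kept. This is an illustrative simplification of the described approach, not the paper's exact algorithm; the inner-product test, the `boost`/`decay` factors, and the function names are all assumptions made here for the sketch.

```python
import numpy as np

def optimal_aggregation(local_grads, weights):
    """Keep only local updates whose gradients align with the
    weighted global gradient (a sketch of the exclusion idea)."""
    # Weighted global gradient over all participating nodes.
    global_grad = sum(w * g for w, g in zip(weights, local_grads))
    # An update is treated as "adverse" (illustrative criterion) when its
    # inner product with the global gradient is negative, i.e. it pushes
    # the model away from the global descent direction.
    return [i for i, g in enumerate(local_grads)
            if np.dot(g, global_grad) > 0]

def update_selection_probs(probs, kept, boost=1.1, decay=0.9):
    """Raise the selection probability of kept nodes and lower that of
    excluded ones, then renormalize (factors are placeholder values)."""
    probs = np.array(probs, dtype=float)
    for i in range(len(probs)):
        probs[i] *= boost if i in kept else decay
    return probs / probs.sum()

# Toy round with 3 nodes; node 2's gradient opposes the other two.
grads = [np.array([1.0, 0.5]), np.array([0.8, 0.6]), np.array([-1.0, -0.7])]
weights = [1 / 3, 1 / 3, 1 / 3]
kept = optimal_aggregation(grads, weights)          # node 2 is excluded
probs = update_selection_probs([1 / 3] * 3, kept)   # nodes 0, 1 boosted
```

In a full FL loop, the server would sample the next round's participants from `probs` (e.g. with `np.random.choice`), so nodes whose updates consistently align with the global objective are selected more often.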
ISSN: 2327-4697, 2334-329X
DOI: 10.1109/TNSE.2022.3146399