Optimisation of federated learning settings under statistical heterogeneity variations
Main Authors: | , , , , , |
---|---|
Format: | Journal Article |
Language: | English |
Published: | 10-06-2024 |
Summary: | Federated Learning (FL) enables local devices to collaboratively learn a shared predictive model by only periodically sharing model parameters with a central aggregator. However, FL can be disadvantaged by statistical heterogeneity produced by the diversity in each local device's data distribution, which creates different levels of Independent and Identically Distributed (IID) data. Furthermore, this can become more complex when optimising different combinations of FL parameters and choosing an optimal aggregator. In this paper, we present an empirical analysis of different FL training parameters and aggregators over various levels of statistical heterogeneity on three datasets. We propose a systematic data partition strategy to simulate different levels of statistical heterogeneity and a metric to measure the level of IID. Additionally, we empirically identify the best FL model and key parameters for datasets with different characteristics. On the basis of these, we present recommended guidelines for FL parameters and aggregators to optimise model performance under different levels of IID and with different datasets. |
---|---|
DOI: | 10.48550/arxiv.2406.06340 |
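The summary above mentions a systematic data partition strategy for simulating levels of statistical heterogeneity and a metric for the level of IID; the record does not include the authors' code. The sketch below is a hedged illustration of one standard approach (not necessarily the one proposed in the paper): Dirichlet-based label partitioning across clients, with the average Jensen-Shannon divergence between each client's label distribution and the global label distribution used as an IID-level score. The `alpha` parameter and helper names are assumptions made for this example.

```python
"""Illustrative sketch only: Dirichlet label partitioning plus a simple
IID-level metric. Not the partition strategy or metric from the paper."""
import numpy as np
from scipy.spatial.distance import jensenshannon


def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients; smaller alpha -> more heterogeneity."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Share of class c assigned to each client, drawn from Dirichlet(alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]


def iid_level(labels, client_indices):
    """Mean JS divergence to the global label distribution (0 = perfectly IID)."""
    num_classes = int(labels.max()) + 1
    global_dist = np.bincount(labels, minlength=num_classes) / len(labels)
    divergences = []
    for idx in client_indices:
        if len(idx) == 0:  # skip clients that received no samples
            continue
        local_dist = np.bincount(labels[idx], minlength=num_classes) / len(idx)
        divergences.append(jensenshannon(local_dist, global_dist) ** 2)
    return float(np.mean(divergences))


if __name__ == "__main__":
    labels = np.random.default_rng(0).integers(0, 10, size=5000)
    for alpha in (100.0, 1.0, 0.1):  # near-IID down to highly non-IID
        parts = dirichlet_partition(labels, num_clients=10, alpha=alpha)
        print(f"alpha={alpha:>6}: IID-level metric = {iid_level(labels, parts):.4f}")
```

In this sketch the single concentration parameter `alpha` controls the heterogeneity level (larger values approach IID), and the scalar divergence score lets different partitions be compared on a common scale, which is the kind of controlled sweep the paper's empirical analysis describes.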