Federated Learning of Neural Network Models with Heterogeneous Structures
Published in: 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 735-740
Main Authors: Kundjanasith Thonglek; Keichi Takahashi; Kohei Ichikawa; Hajimu Iida; Chawanat Nakasan
Format: Conference Proceeding
Language: English
Published: IEEE, 01-12-2020
Subjects: Federated learning; Ensemble learning; Deep Neural Network; Decentralized dataset; Optimization methods; Neural networks; Collaborative work; Computational modeling; Servers; Task analysis; Training
Online Access: https://ieeexplore.ieee.org/document/9356244
Abstract | Federated learning trains a model on a centralized server using datasets distributed over a large number of edge devices. Applying federated learning ensures data privacy because it does not transfer local data from edge devices to the server. Existing federated learning algorithms assume that all deployed models share the same structure. However, it is often infeasible to distribute the same model to every edge device because of hardware limitations such as computing performance and storage space. This paper proposes a novel federated learning algorithm to aggregate information from multiple heterogeneous models. The proposed method uses weighted average ensemble to combine the outputs from each model. The weight for the ensemble is optimized using black box optimization methods. We evaluated the proposed method using diverse models and datasets and found that it can achieve comparable performance to conventional training using centralized datasets. Furthermore, we compared six different optimization methods to tune the weights for the weighted average ensemble and found that tree parzen estimator achieves the highest accuracy among the alternatives. |
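The abstract describes a weighted-average ensemble over the outputs of heterogeneous edge models, with the ensemble weights tuned by black-box optimization and the tree Parzen estimator (TPE) reported as the most accurate of the six optimizers compared. The sketch below illustrates that idea; it is not the authors' implementation. The synthetic validation probabilities, the number of models, and the use of Optuna's TPESampler as the TPE implementation are all assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code): weighted-average ensemble over the
# class probabilities of K heterogeneous models, with the ensemble weights
# tuned by a black-box optimizer (TPE, via Optuna, as one possible choice).
import numpy as np
import optuna

rng = np.random.default_rng(0)
num_models, num_samples, num_classes = 3, 500, 10

# Stand-ins for the per-model class probabilities on a shared validation set;
# in practice these would come from the heterogeneous edge models.
model_outputs = [rng.dirichlet(np.ones(num_classes), size=num_samples)
                 for _ in range(num_models)]
val_labels = rng.integers(0, num_classes, size=num_samples)


def ensemble_accuracy(weights):
    """Validation accuracy of the weighted-average ensemble for given weights."""
    w = np.asarray(weights, dtype=float)
    w = w / (w.sum() + 1e-12)            # normalize so the weights sum to 1
    combined = sum(wk * probs for wk, probs in zip(w, model_outputs))
    return float((combined.argmax(axis=1) == val_labels).mean())


def objective(trial):
    # One weight per model, searched in [0, 1]; normalization happens above.
    weights = [trial.suggest_float(f"w{k}", 0.0, 1.0)
               for k in range(len(model_outputs))]
    return ensemble_accuracy(weights)


# TPE is Optuna's default sampler; it is made explicit here for clarity.
study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=100)
print("best weights:", study.best_params, "val accuracy:", study.best_value)
```

Once tuned on the server-side validation set, the same normalized weights would be reused to combine the edge models' predictions at inference time; this is only an illustration of the general scheme, not the paper's exact protocol.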
Author | Takahashi, Keichi; Nakasan, Chawanat; Ichikawa, Kohei; Iida, Hajimu; Thonglek, Kundjanasith
Author_xml | 1. Thonglek, Kundjanasith (thonglek.kundjanasith.ti7@is.naist.jp), Nara Institute of Science and Technology, Nara, Japan; 2. Takahashi, Keichi (keichi@is.naist.jp), Nara Institute of Science and Technology, Nara, Japan; 3. Ichikawa, Kohei (ichikawa@is.naist.jp), Nara Institute of Science and Technology, Nara, Japan; 4. Iida, Hajimu (iida@itc.naist.jp), Nara Institute of Science and Technology, Nara, Japan; 5. Nakasan, Chawanat (chawanat@staff.kanazawa-u.ac.jp), Kanazawa University, Japan
CODEN | IEEPAD |
ContentType | Conference Proceeding |
DOI | 10.1109/ICMLA51294.2020.00120 |
EISBN | 1728184703 9781728184708 |
EndPage | 740 |
ExternalDocumentID | 9356244 |
Genre | orig-research |
Language | English |
PageCount | 6 |
PublicationDate | 2020-Dec. |
PublicationDateYYYYMMDD | 2020-12-01 |
PublicationTitle | 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA) |
PublicationTitleAbbrev | ICMLA |
PublicationYear | 2020 |
Publisher | IEEE |
StartPage | 735 |
SubjectTerms | Collaborative work; Computational modeling; Decentralized dataset; Deep Neural Network; Ensemble learning; Federated learning; Neural networks; Optimization methods; Servers; Task analysis; Training
Title | Federated Learning of Neural Network Models with Heterogeneous Structures |
URI | https://ieeexplore.ieee.org/document/9356244 |