Communication-efficient and Byzantine-robust distributed learning with statistical guarantee

Bibliographic Details
Published in: Pattern Recognition, Vol. 137, p. 109312
Main Authors: Zhou, Xingcai; Chang, Le; Xu, Pengfei; Lv, Shaogao
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-05-2023
Description
Summary:
•Both communication efficiency and robustness in convex distributed learning are taken into account simultaneously, which is rare in the existing literature.
•The current work develops two communication-efficient and robust distributed learning algorithms for convex problems. In particular, the proposed algorithms are provably robust against Byzantine failures and achieve optimal statistical rates for strongly convex losses and convex (non-smooth) penalties.
•For typical statistical models such as generalized linear models, the results show that statistical errors dominate optimization errors in finitely many iterations.
•Simulated and real data experiments demonstrate the comparable performance of the algorithms.
Communication efficiency and robustness are two major issues in modern distributed learning frameworks, arising in practical situations where some computing nodes have limited communication power or exhibit adversarial behavior. To address the two issues simultaneously, this paper develops two communication-efficient and robust distributed learning algorithms for convex problems. The approach builds on the surrogate likelihood framework together with median and trimmed-mean operations. In particular, the proposed algorithms are provably robust against Byzantine failures and achieve optimal statistical rates for strongly convex losses and convex (non-smooth) penalties. For typical statistical models such as generalized linear models, the results show that statistical errors dominate optimization errors in finite iterations. Simulated and real data experiments are conducted to demonstrate the numerical performance of the algorithms.
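The abstract attributes Byzantine robustness to coordinate-wise median and trimmed-mean aggregation of the messages sent by worker nodes. The following is a minimal NumPy sketch of those two aggregators in a toy setting; it is an illustration, not the paper's implementation, and the function names, the trimming fraction beta, and the simulated gradients are assumptions made for demonstration.

```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of worker gradients (rows = workers)."""
    return np.median(grads, axis=0)

def trimmed_mean(grads, beta):
    """Coordinate-wise beta-trimmed mean: in each coordinate, drop the
    beta fraction of largest and smallest values, then average the rest."""
    m = grads.shape[0]
    k = int(np.floor(beta * m))        # number trimmed from each tail
    sorted_grads = np.sort(grads, axis=0)
    return sorted_grads[k:m - k].mean(axis=0)

# Toy round (hypothetical setup): 10 workers, 3 Byzantine, dimension 5.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(7, 5))
byzantine = rng.normal(loc=50.0, scale=1.0, size=(3, 5))  # adversarial reports
grads = np.vstack([honest, byzantine])

print(coordinate_median(grads))    # close to the honest mean (about 1)
print(trimmed_mean(grads, 0.3))    # trims 3 values per tail, also about 1
```

In this sketch, both aggregators recover a gradient close to the honest average even though 30% of the workers report wildly corrupted values, which is the intuition behind using them in place of a plain mean.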
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.109312