Exact Diffusion for Distributed Optimization and Learning-Part I: Algorithm Development
| Published in: | IEEE Transactions on Signal Processing, Vol. 67, No. 3, pp. 708-723 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01-02-2019 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Summary: | This paper develops a distributed optimization strategy with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown in Part II of this paper to have a wider stability range and superior convergence performance than the EXTRA strategy. The exact diffusion method is applicable to locally balanced left-stochastic combination matrices which, compared to the conventional doubly stochastic matrix, are more general and able to endow the algorithm with faster convergence rates, more flexible step-size choices, and improved privacy-preserving properties. The derivation of the exact diffusion strategy relies on reformulating the aggregate optimization problem as a penalized problem and resorting to a diagonally weighted incremental construction. Detailed stability and convergence analyses are pursued in Part II of this paper and are facilitated by examining the evolution of the error dynamics in a transformed domain. Numerical simulations illustrate the theoretical conclusions. |
|---|---|
| ISSN: | 1053-587X; 1941-0476 |
| DOI: | 10.1109/TSP.2018.2875898 |
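
The summary above outlines how the exact diffusion strategy is derived (a penalized reformulation followed by a diagonally weighted incremental construction) but does not state the recursion itself. The snippet below is a minimal sketch of a diffusion-style adapt-correct-combine iteration of the kind such strategies use, applied to a distributed least-squares problem; the specific update equations, the ring combination matrix `A`, the step size `mu`, and the quadratic local costs are illustrative assumptions and are not taken from this record.

```python
import numpy as np

# Illustrative adapt-correct-combine diffusion recursion for consensus
# optimization over a network. The update form, data, and step size are
# assumptions for demonstration, not taken from the catalog record above.

rng = np.random.default_rng(0)
N, d = 5, 3                      # number of agents, parameter dimension

# Per-agent quadratic costs J_k(w) = 0.5*||H_k w - y_k||^2 (hypothetical data).
H = [rng.standard_normal((10, d)) for _ in range(N)]
w_true = rng.standard_normal(d)
y = [Hk @ w_true + 0.01 * rng.standard_normal(10) for Hk in H]

def grad(k, w):
    """Gradient of agent k's local cost at w."""
    return H[k].T @ (H[k] @ w - y[k])

# A simple symmetric doubly stochastic combination matrix on a ring; this is
# a special case of the locally balanced left-stochastic policies the paper
# targets.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25
A_bar = 0.5 * (A + np.eye(N))    # averaged combination matrix

mu = 0.01                        # step size (assumed)
W = np.zeros((N, d))             # current iterates, one row per agent
Psi_prev = W.copy()

for _ in range(2000):
    # Adapt: each agent takes a local gradient step.
    Psi = np.vstack([W[k] - mu * grad(k, W[k]) for k in range(N)])
    # Correct: add the difference between the previous iterate and the
    # previous adaptation result, which counteracts the bias of plain
    # diffusion.
    Phi = Psi + W - Psi_prev
    # Combine: each agent averages corrected iterates over its neighborhood.
    W = A_bar.T @ Phi
    Psi_prev = Psi

print("max deviation of agent iterates from w_true:",
      np.max(np.abs(W - w_true)))
```

In this sketch all agents drive their iterates toward a common minimizer of the aggregate least-squares cost; the symmetric ring matrix keeps the example simple, whereas the paper's interest is in the broader class of locally balanced left-stochastic combination matrices described in the summary.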