Distributed Newton Method for Large-Scale Consensus Optimization

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 64, No. 10, pp. 3983-3994
Main Authors: Tutunov, Rasul; Bou-Ammar, Haitham; Jadbabaie, Ali
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-10-2019
Description
Summary: In this paper, we propose a distributed Newton method for decentralized optimization of large sums of convex functions. Our method is based on creating a set of separable finite-sum minimization problems via a decomposition technique known as Global Consensus, which distributes the computation across the nodes of a graph and enforces a consensus constraint among the separated variables. The key idea is to exploit the sparsity of the dual Hessian and to recast the computation of the Newton step as that of efficiently solving symmetric diagonally dominant linear equations. We validate the algorithm both theoretically and empirically. On the theory side, we show that it exhibits superlinear convergence within a neighborhood of the optimum. Empirically, we show that it outperforms state-of-the-art algorithms, including ADMM, on a variety of large-scale optimization problems. The proposed approach scales to large problems and has low communication overhead.
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2019.2907711
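
The summary above combines two ingredients: a global-consensus split of the finite sum into per-node copies with an agreement constraint, and a Newton step obtained by solving a symmetric diagonally dominant (SDD) linear system derived from the dual Hessian. The sketch below is only an illustration of that computational primitive, not the authors' algorithm: it assumes made-up scalar quadratic local objectives, a path-graph Laplacian as the coupling, and a quadratic penalty in place of the exact consensus constraint, so that a single centralized Newton step reduces to one SDD solve, carried out here with conjugate gradients from SciPy.

# A minimal, centralized sketch (not the paper's method): the global-consensus
# idea replaces min_x sum_i f_i(x) by per-node copies x_i that are pushed toward
# agreement, and the resulting Newton-type step reduces to solving a symmetric
# diagonally dominant (SDD) linear system. All concrete choices here are
# illustrative assumptions: scalar quadratics f_i(x) = 0.5*(x - a_i)^2, a
# path-graph Laplacian as the coupling, and a quadratic penalty with weight rho
# instead of the exact consensus constraint.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

n = 50                                  # number of nodes, one scalar copy x_i per node
rng = np.random.default_rng(0)
a = rng.normal(size=n)                  # data defining the local objectives f_i

# Laplacian of a path graph: a canonical sparse SDD matrix coupling neighboring copies.
deg = np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]
L = diags([deg, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1])

# Penalized consensus objective: sum_i 0.5*(x_i - a_i)^2 + 0.5*rho * x^T L x.
# Its Newton step taken from x = 0 solves the SDD system (I + rho*L) x = a.
rho = 10.0
A = identity(n) + rho * L               # symmetric, diagonally dominant, sparse
x, info = cg(A, a)                      # conjugate gradients: a cheap solver for SDD systems
assert info == 0                        # 0 means CG converged to its default tolerance

print("spread of the local copies:", x.max() - x.min())       # shrinks as rho grows
print("mean of copies:", x.mean(), " mean of data:", a.mean()) # agree up to solver tolerance, since L @ ones = 0

As the penalty weight rho grows, the per-node copies are driven toward agreement; in the paper, the analogous SDD systems arise from the dual Hessian and are solved in a distributed fashion with low communication overhead.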