Optimal complexity and certification of Bregman first-order methods


Bibliographic Details
Published in: Mathematical Programming, Vol. 194, no. 1-2, pp. 41-83
Main Authors: Dragomir, Radu-Alexandru; Taylor, Adrien B.; d’Aspremont, Alexandre; Bolte, Jérôme
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01-07-2022
Description
Summary: We provide a lower bound showing that the O(1/k) convergence rate of the NoLips method (a.k.a. Bregman Gradient or Mirror Descent) is optimal for the class of problems satisfying the relative smoothness assumption. This assumption appeared in the recent developments around the Bregman Gradient method, where acceleration remained an open issue. The main inspiration behind this lower bound stems from an extension of the performance estimation framework of Drori and Teboulle (Mathematical Programming, 2014) to Bregman first-order methods. This technique allows computing worst-case scenarios for NoLips in the context of relatively smooth minimization. In particular, we used numerically generated worst-case examples as a basis for obtaining the general lower bound.
ISSN: 0025-5610, 1436-4646
DOI: 10.1007/s10107-021-01618-1
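As an illustrative sketch only (not taken from the paper's experiments), the NoLips/Bregman Gradient update mentioned in the abstract replaces the Euclidean proximity term of gradient descent with a Bregman divergence D_h. With the negative-entropy kernel h(x) = Σ x_i log x_i on the probability simplex, the step has a closed form: the classical exponentiated-gradient update. The toy objective and step size below are assumptions chosen for demonstration.

```python
import numpy as np

def nolips_step(x, grad, step):
    """One Bregman gradient (NoLips / mirror descent) step with the
    negative-entropy kernel h(x) = sum_i x_i log x_i on the simplex.
    Minimizing <grad, y> + (1/step) * D_h(y, x) over the simplex yields
    this multiplicative (exponentiated-gradient) update in closed form."""
    y = x * np.exp(-step * grad)
    return y / y.sum()  # renormalize back onto the simplex

# Toy objective (illustrative assumption, not from the paper):
# f(x) = 0.5 * ||x - c||^2, whose minimizer c lies in the simplex interior.
c = np.full(5, 0.2)
x = np.array([0.9, 0.025, 0.025, 0.025, 0.025])  # feasible starting point
for _ in range(1000):
    x = nolips_step(x, grad=x - c, step=0.5)  # gradient of f is x - c
```

After the loop, the iterate stays on the simplex and approaches the minimizer c; with a Euclidean kernel h(x) = ½‖x‖² the same scheme reduces to ordinary projected gradient descent.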