Inducing Uniform Asymptotic Stability in Non-Autonomous Accelerated Optimization Dynamics via Hybrid Regularization
Published in: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 3000-3005
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 01-12-2019
Summary: There have been many recent efforts to study accelerated optimization algorithms from the perspective of dynamical systems. In this paper, we focus on the robustness properties of the time-varying continuous-time version of these dynamics. These properties are critical for the implementation of accelerated algorithms in feedback-based control and optimization architectures. We show that a family of dynamics related to the continuous-time limit of Nesterov's accelerated gradient method can be rendered unstable under arbitrarily small bounded disturbances. Indeed, while solutions of these dynamics may converge to the set of optimizers, in general, this set may not be uniformly asymptotically stable. To induce uniformity, and robustness as a byproduct, we propose a framework where the dynamics are regularized by using resetting mechanisms that are modeled by well-posed hybrid dynamical systems. For these hybrid dynamics, we establish uniform asymptotic stability and robustness properties, as well as convergence rates that are similar to those of the non-hybrid dynamics. We finish by characterizing a family of discretization mechanisms that retain the main stability and robustness properties of the hybrid algorithms. (An illustrative simulation sketch of these ideas follows the record below.)
ISSN: 2576-2370
DOI: 10.1109/CDC40024.2019.9030127
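The summary does not fix a particular equation, but a well-known instance of the continuous-time limit of Nesterov's accelerated gradient method is the Su-Boyd-Candès ODE x''(t) + (3/t) x'(t) + ∇f(x(t)) = 0, whose vanishing damping term 3/t is the source of the non-uniformity the abstract discusses. The Python sketch below is a rough illustration under stated assumptions only: a quadratic objective, a forward-Euler discretization, and a simple periodic timer/momentum restart standing in for the paper's resetting mechanism. All names and parameters are illustrative and are not taken from the paper.

```python
import numpy as np

# Illustrative only: explicit (forward-Euler) integration of the ODE
#   x'' + (3/tau) x' + grad_f(x) = 0
# (a continuous-time limit of Nesterov's accelerated gradient method),
# augmented with a simple periodic restart that resets the timer tau and
# the momentum v. The restart rule and all constants below are assumed
# for illustration; they are not the paper's specific hybrid dynamics.

def grad_f(x):
    """Gradient of the example objective f(x) = 0.5 * ||x||^2."""
    return x

def simulate(x0, T=20.0, dt=1e-3, tau0=0.1, tau_max=2.0):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)   # velocity / momentum state
    tau = tau0             # timer state driving the damping 3/tau
    for _ in range(int(T / dt)):
        # Flow map: one Euler step of the second-order dynamics.
        a = -(3.0 / tau) * v - grad_f(x)
        x = x + dt * v
        v = v + dt * a
        tau = tau + dt
        # Jump map: once the damping 3/tau has decayed, reset the timer
        # and momentum, keeping the damping bounded away from zero.
        if tau >= tau_max:
            tau = tau0
            v = np.zeros_like(v)
    return x

if __name__ == "__main__":
    print(simulate([5.0, -3.0]))  # approaches the minimizer at the origin
```

Resetting the timer confines the damping coefficient 3/tau to the fixed interval [3/tau_max, 3/tau0], which conveys the intuition for why resetting can restore uniform asymptotic stability; the paper formalizes this with well-posed hybrid dynamical systems (flow and jump maps) and establishes the accompanying robustness guarantees and convergence rates.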