An implicit gradient-descent procedure for minimax problems
| Field | Value |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | 01-06-2019 |
| Online Access | Get full text |
Summary: A game-theory-inspired methodology is proposed for finding a function's saddle points. While explicit descent methods are known to have severe convergence issues, implicit methods are natural in an adversarial setting, as they take the other player's optimal strategy into account. The proposed implicit scheme has an adaptive learning rate that makes it transition to Newton's method in the neighborhood of saddle points. Convergence is shown through local analysis and, in non-convex-concave settings, through numerical examples in optimal transport and linear programming. An ad hoc quasi-Newton method is developed for high-dimensional problems, for which inverting the Hessian of the objective function may entail a high computational cost.
DOI: 10.48550/arxiv.1906.00233
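
To illustrate the idea in the summary, the sketch below applies a linearized implicit step to the toy saddle-point problem min_x max_y f(x, y) = x·y, where plain explicit gradient descent-ascent spirals away from the saddle at the origin. With V = (∇_x f, −∇_y f) and J the Jacobian of V, the update z⁺ = z − η (I + ηJ)⁻¹ V(z) tends to the Newton step z − J⁻¹V(z) as η grows, consistent with the summary's description of a learning rate that transitions to Newton's method near saddle points. The specific adaptive rule for η and all function names are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def V(z):
    # Gradient field for the toy saddle problem min_x max_y f(x, y) = x * y:
    # V(z) = (df/dx, -df/dy) = (y, -x); its only zero is the saddle at (0, 0).
    x, y = z
    return np.array([y, -x])

def jacobian(field, z, eps=1e-6):
    # Central finite-difference Jacobian of `field` at z.
    n = z.size
    J = np.empty((n, n))
    for i in range(n):
        dz = np.zeros(n)
        dz[i] = eps
        J[:, i] = (field(z + dz) - field(z - dz)) / (2 * eps)
    return J

def implicit_step(z, eta):
    # Linearized implicit Euler step: solve (I + eta*J)(z_new - z) = -eta*V(z).
    # As eta -> infinity this approaches the Newton step z - J^{-1} V(z).
    J = jacobian(V, z)
    return z + np.linalg.solve(np.eye(z.size) + eta * J, -eta * V(z))

z = np.array([1.0, 1.0])
for _ in range(20):
    # Hypothetical adaptive rule: eta grows as ||V|| shrinks, so the step
    # morphs into Newton's method in the neighborhood of the saddle point.
    eta = 1.0 / max(np.linalg.norm(V(z)), 1e-12)
    z = implicit_step(z, eta)

print(z)  # converges to (0, 0); the explicit update z - eta*V(z) diverges here
```

For high-dimensional problems, forming J and solving the linear system directly is the expensive part; that is the cost the summary's quasi-Newton variant is said to address, and in a sketch like this one would replace np.linalg.solve with a matrix-free iterative solver.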