An implicit gradient-descent procedure for minimax problems

Bibliographic Details
Published in: Mathematical Methods of Operations Research (Heidelberg, Germany), Vol. 97, No. 1, pp. 57–89
Main Authors: Essid, Montacer, Tabak, Esteban G., Trigila, Giulio
Format: Journal Article
Language:English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01-02-2023 (Springer Nature B.V.)
Description
Summary: A game-theory-inspired methodology is proposed for finding a function's saddle points. While explicit descent methods are known to have severe convergence issues, implicit methods are natural in an adversarial setting, as they take the other player's optimal strategy into account. The proposed implicit scheme has an adaptive learning rate that makes it transition to Newton's method in the neighborhood of saddle points. Convergence is shown through local analysis and through numerical examples in optimal transport and linear programming. An ad hoc quasi-Newton method is developed for high-dimensional problems, for which inverting the Hessian of the objective function may entail a high computational cost.
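The summary's key idea can be illustrated with a minimal sketch, not taken from the paper itself: a linearized implicit-Euler step on the saddle field V = (∇ₓf, −∇ᵧf), where solving z⁺ = z − η·V(z⁺) to first order gives z⁺ = z − η·(I + η·DV(z))⁻¹V(z). As η → ∞ the update tends to the Newton step −DV⁻¹V, which mirrors the adaptive-rate transition to Newton's method described in the abstract. The function names and the toy problem f(x, y) = xy below are illustrative choices, not the authors' actual scheme.

```python
import numpy as np

def implicit_gda_step(z, saddle_field, jac_field, eta):
    """One linearized implicit step on the saddle field V = (f_x, -f_y).

    Approximates the implicit-Euler equation  z+ = z - eta * V(z+)
    by its first-order solution
        z+ = z - eta * (I + eta * DV(z))^{-1} V(z).
    For large eta the step approaches Newton's method on V(z) = 0.
    """
    V = saddle_field(z)
    J = jac_field(z)
    n = len(z)
    return z - eta * np.linalg.solve(np.eye(n) + eta * J, V)

# Toy bilinear saddle problem f(x, y) = x * y, saddle point at the origin.
# Explicit gradient descent-ascent cycles on this problem; the implicit
# step contracts toward the saddle at every iteration.
saddle_field = lambda z: np.array([z[1], -z[0]])             # (f_x, -f_y)
jac_field = lambda z: np.array([[0.0, 1.0], [-1.0, 0.0]])    # DV (constant here)

z = np.array([1.0, 0.5])
for _ in range(50):
    z = implicit_gda_step(z, saddle_field, jac_field, eta=1.0)
print(np.linalg.norm(z))  # norm shrinks by 1/sqrt(1 + eta^2) each step
```

For this bilinear example the linearized step coincides with the exact implicit-Euler step, so the iterate norm contracts geometrically; for general nonlinear V the linearization is only a local approximation.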
ISSN:1432-2994
1432-5217
DOI:10.1007/s00186-022-00805-w