Distributed Learning for Stochastic Generalized Nash Equilibrium Problems

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 65, No. 15, pp. 3893-3908
Main Authors: Chung-Kai Yu, Mihaela van der Schaar, Ali H. Sayed
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-08-2017
Description
Summary: This paper examines a stochastic formulation of the generalized Nash equilibrium problem where agents are subject to randomness in the environment of unknown statistical distribution. We focus on fully distributed online learning by agents and employ penalized individual cost functions to deal with coupled constraints. Three stochastic gradient strategies are developed with constant step-sizes. We allow the agents to use heterogeneous step-sizes and show that the penalty solution is able to approach the Nash equilibrium in a stable manner within O(μ_max), for a small maximum step-size μ_max and sufficiently large penalty parameters. The operation of the algorithm is illustrated by considering the network Cournot competition problem.
ISSN: 1053-587X; 1941-0476
DOI: 10.1109/TSP.2017.2695451
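The summary above describes penalized stochastic gradient play with constant, heterogeneous step-sizes, illustrated on a network Cournot competition. The following is a minimal sketch of that idea, not the authors' exact algorithms: it assumes a linear inverse-demand price with a random intercept of unknown distribution, per-firm linear production costs, and a coupled capacity constraint handled through a quadratic penalty. For brevity, every firm is assumed to observe the realized market price directly rather than estimating it through in-network cooperation, which is where the fully distributed aspect of the paper lies; all numerical parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

K = 5                                  # number of firms (agents)
c = rng.uniform(0.5, 1.5, size=K)      # marginal production costs (assumed)
C = 10.0                               # shared capacity bound: sum_k x_k <= C (assumed)
rho = 50.0                             # penalty parameter (taken large, per the paper's regime)
mu = rng.uniform(0.001, 0.002, size=K) # heterogeneous constant step-sizes

x = np.zeros(K)                        # production quantities

def price_sample(total_supply):
    """Noisy inverse-demand price with a random intercept unknown to the agents."""
    P0 = 20.0 + rng.normal(0.0, 1.0)   # stochastic demand intercept
    return P0 - total_supply

for i in range(20000):
    S = x.sum()
    p = price_sample(S)                # realized price observed at this iteration
    # Stochastic gradient of firm k's penalized cost
    #   J_k(x) = c_k * x_k - p(S) * x_k + (rho/2) * max(0, S - C)^2
    # with respect to x_k, using the realized price as the gradient estimate:
    penalty_grad = rho * max(0.0, S - C)
    grad = c - p + x + penalty_grad
    # Constant-step-size update, projected onto x_k >= 0
    x = np.maximum(0.0, x - mu * grad)

print("approximate penalized equilibrium quantities:", np.round(x, 3))
print("total supply:", round(x.sum(), 3), "capacity:", C)

Under this setup, the iterates settle into a small neighborhood of the penalized equilibrium whose size shrinks with the largest step-size, which is the qualitative behavior the O(μ_max) statement in the summary refers to.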