Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks


Bibliographic Details
Published in: IEEE Transactions on Information Theory, Vol. 65, No. 2, pp. 742-769
Main Authors: Soltanolkotabi, Mahdi; Javanmard, Adel; Lee, Jason D.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-02-2019
Description
Summary: In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations, the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2018.2854560
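
The summary describes a concrete training setup: a one-hidden-layer network with quadratic activations, trained by plain gradient descent on data whose inputs are i.i.d. Gaussian and whose labels are generated by planted weights. The following NumPy sketch is not the authors' code; it only illustrates that realizable, over-parameterized setting, and the dimensions, initialization scale, and step size are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: k*d = 500 parameters > n = 300 samples (over-parameterized regime).
d, k, n = 20, 25, 300
X = rng.standard_normal((n, d))                     # i.i.d. Gaussian inputs
W_star = rng.standard_normal((k, d)) / np.sqrt(d)   # planted weights (assumed scale)
y = ((X @ W_star.T) ** 2).sum(axis=1)               # labels from the planted quadratic-activation network

W = 0.1 * rng.standard_normal((k, d))               # small random initialization (illustrative)
lr = 1e-3                                           # illustrative step size

for t in range(3001):
    Z = X @ W.T                                     # pre-activations, shape (n, k)
    preds = (Z ** 2).sum(axis=1)                    # f(x_i) = sum_j (w_j^T x_i)^2
    r = preds - y                                   # residuals
    loss = 0.5 * np.mean(r ** 2)                    # least-squares training objective
    grad = (2.0 / n) * (Z * r[:, None]).T @ X       # gradient of the loss w.r.t. W, shape (k, d)
    W -= lr * grad                                  # plain gradient descent step
    if t % 500 == 0:
        print(f"iter {t:4d}   loss {loss:.3e}")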