Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning
Published in: Neural Networks, Vol. 84, pp. 28-38
Main Authors: , ,
Format: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01-12-2016
Summary: Most machine learning approaches stem from the principle of minimizing mean squared distance, which rests on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning exploit properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1). The drawback of these approaches is the increased computational cost of optimization. So far, no approach has been suggested for dealing with arbitrary error functionals in a flexible and computationally efficient framework. In this paper, we develop a theory and basic universal data approximation algorithms (k-means, principal components, principal manifolds and graphs, regularized and sparse regression) based on piece-wise quadratic error potentials of subquadratic growth (PQSQ potentials). We develop a new and universal framework for minimizing arbitrary sub-quadratic error potentials, using an algorithm with guaranteed fast convergence to a local or global error minimum. The theory of PQSQ potentials is based on the notion of the cone of minorant functions and represents a natural approximation formalism based on the application of min-plus algebra. The approach can be applied in most existing machine learning methods, including methods of data approximation and regularized and sparse regression, leading to an improvement in the computational cost/accuracy trade-off. We demonstrate that, on synthetic and real-life datasets, PQSQ-based machine learning methods achieve computational performance orders of magnitude faster than the corresponding state-of-the-art methods, while having similar or better approximation accuracy.
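To make the construction concrete, here is a minimal sketch in Python of a PQSQ-style potential as outlined in the abstract: quadratic pieces b_k x^2 + c_k agree with a target potential u (here the L1 potential u(x) = |x|) at a set of thresholds, and the final piece is constant, so the approximation saturates and growth stays subquadratic. The threshold values and helper names are illustrative assumptions, not the authors' exact parametrization.

```python
import numpy as np

def make_pqsq(u, thresholds):
    """Piecewise quadratic approximation of a potential u.

    A sketch assuming the construction outlined in the abstract:
    on each interval [r_k, r_{k+1}) the potential is b_k*x**2 + c_k,
    with coefficients chosen so the pieces agree with u at the
    thresholds; the last piece is constant (saturated tail).
    Thresholds must be strictly increasing and start at 0.
    """
    r = np.asarray(thresholds, dtype=float)
    b = np.empty(len(r))
    c = np.empty(len(r))
    for k in range(len(r) - 1):
        b[k] = (u(r[k + 1]) - u(r[k])) / (r[k + 1] ** 2 - r[k] ** 2)
        c[k] = u(r[k]) - b[k] * r[k] ** 2
    b[-1] = 0.0            # zero curvature beyond the last threshold,
    c[-1] = u(r[-1])       # so the potential is flat (subquadratic growth)

    def f(x):
        x = np.abs(np.asarray(x, dtype=float))
        k = np.searchsorted(r, x, side="right") - 1  # piece index per point
        return b[k] * x ** 2 + c[k]
    return f

# Approximate the L1 potential |x|; values beyond the last threshold
# are clipped at u(2.0) = 2.0, which is what tames large outliers.
f = make_pqsq(np.abs, thresholds=[0.0, 0.5, 1.0, 2.0])
print(f([0.25, 0.75, 1.5, 10.0]))
```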
Highlights:
• Quadratic error functionals demonstrate many weaknesses for complex data.
• The drawback of non-quadratic error functionals is the computational cost of optimization.
• New algorithms use piece-wise quadratic potentials of subquadratic growth (PQSQ).
• PQSQ-based algorithms are as fast as fast heuristic methods but more accurate (see the iteration sketched below).
• PQSQ-based algorithms are computationally efficient for regularized sparse regression.
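As a companion sketch, the kind of alternating iteration behind the speed claimed above can be illustrated on the simplest possible task, a robust "PQSQ mean": each step assigns residuals to their quadratic pieces and then solves the resulting weighted quadratic problem in closed form. Points falling in the saturated interval get zero weight, which is what suppresses outliers. The function name, threshold values, and iteration cap are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pqsq_mean(x, thresholds, n_iter=20):
    """Robust location estimate: alternate (1) assignment of each
    residual to its quadratic piece and (2) a closed-form weighted
    least-squares update. A sketch under the PQSQ construction for
    the L1 potential u(t) = |t|."""
    x = np.asarray(x, dtype=float)
    r = np.asarray(thresholds, dtype=float)
    b = np.zeros(len(r))                 # piece curvatures for u = |.|
    for k in range(len(r) - 1):
        b[k] = (r[k + 1] - r[k]) / (r[k + 1] ** 2 - r[k] ** 2)
    mu = x.mean()                        # plain quadratic starting point
    for _ in range(n_iter):
        k = np.searchsorted(r, np.abs(x - mu), side="right") - 1
        w = b[k]                         # saturated points get weight 0
        if w.sum() == 0:                 # everything saturated: stop
            break
        mu = (w * x).sum() / w.sum()     # weighted least-squares step
    return mu

# Two gross outliers barely move the PQSQ mean, unlike the plain mean.
data = np.concatenate([np.random.normal(0.0, 1.0, 100), [50.0, 60.0]])
print(data.mean(), pqsq_mean(data, thresholds=[0.0, 0.5, 1.0, 2.0, 4.0]))
```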
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2016.08.007