The Trimmed Lasso: Sparsity and Robustness
Main Authors: Bertsimas, Dimitris; Copenhaver, Martin S.; Mazumder, Rahul
Format: Journal Article
Language: English
Published: 15-08-2017
Summary: Nonconvex penalty methods for sparse modeling in linear regression have been a topic of fervent interest in recent years. Herein, we study a family of nonconvex penalty functions that we call the trimmed Lasso and that offers exact control over the desired level of sparsity of estimators. We analyze its structural properties and in doing so show the following:

1) Drawing parallels between robust statistics and robust optimization, we show that the trimmed-Lasso-regularized least squares problem can be viewed as a generalized form of total least squares under a specific model of uncertainty. In contrast, this same model of uncertainty, viewed instead through a robust optimization lens, leads to the convex SLOPE (or OWL) penalty.

2) Further, in relating the trimmed Lasso to commonly used sparsity-inducing penalty functions, we provide a succinct characterization of the connection between trimmed-Lasso-like approaches and penalty functions that are coordinate-wise separable, showing that the trimmed penalties subsume existing coordinate-wise separable penalties, with strict containment in general.

3) Finally, we describe a variety of exact and heuristic algorithms, both existing and new, for trimmed-Lasso-regularized estimation problems. We include a comparison between the different approaches and an accompanying implementation of the algorithms.
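For reference, the penalty at the center of the summary can be stated compactly. The following is a sketch of the standard definition; the sorted-magnitude notation is assumed here and does not appear in this record:

```latex
% Assumed notation (not in the record): |\beta_{(1)}| \ge \dots \ge |\beta_{(p)}|
% are the entries of \beta sorted by magnitude. The trimmed Lasso penalty sums
% the p - k smallest magnitudes, so it vanishes exactly on k-sparse vectors,
% which is the "exact control over sparsity" the summary refers to:
\[
  T_k(\beta) \;=\; \sum_{i=k+1}^{p} \bigl|\beta_{(i)}\bigr|,
  \qquad
  T_k(\beta) = 0 \;\Longleftrightarrow\; \|\beta\|_0 \le k .
\]
% For contrast, the convex SLOPE (OWL) penalty mentioned in item 1 also acts
% on sorted magnitudes, but with fixed nonincreasing weights:
\[
  \Lambda(\beta) \;=\; \sum_{i=1}^{p} \lambda_i \bigl|\beta_{(i)}\bigr|,
  \qquad \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p \ge 0 .
\]
```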
DOI: 10.48550/arxiv.1708.04527
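For readers who want to experiment with the ideas in the summary, below is a minimal, self-contained Python sketch of the penalty and of one standard difference-of-convex heuristic for trimmed-Lasso-regularized least squares. This is an illustration only, not the paper's accompanying implementation: the objective min_beta 0.5*||y - X beta||^2 + lam*T_k(beta), the warm start, and all parameter choices are assumptions made here.

```python
import numpy as np

def trimmed_lasso_penalty(beta, k):
    """T_k(beta): sum of the p - k smallest absolute entries of beta."""
    mags = np.sort(np.abs(beta))[::-1]   # magnitudes, largest first
    return mags[k:].sum()

def soft_threshold(z, t):
    """Elementwise soft-thresholding; t may be a vector of thresholds."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def trimmed_lasso_heuristic(X, y, k, lam, outer_iters=30, ista_iters=300):
    """Illustrative alternating (difference-of-convex) heuristic for
    min_beta 0.5*||y - X beta||^2 + lam * T_k(beta).

    It uses T_k(beta) = ||beta||_1 - (sum of the k largest |beta_i|): at each
    outer step the k largest coordinates are exempted from the L1 penalty and
    the resulting weighted Lasso subproblem is solved by plain ISTA.
    """
    p = X.shape[1]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # warm start (an assumption)
    L = np.linalg.norm(X, 2) ** 2                 # Lipschitz constant of the gradient
    for _ in range(outer_iters):
        w = np.ones(p)
        w[np.argsort(-np.abs(beta))[:k]] = 0.0    # no penalty on the top-k coords
        for _ in range(ista_iters):
            grad = X.T @ (X @ beta - y)
            beta = soft_threshold(beta - grad / L, lam * w / L)
    return beta

# Tiny synthetic demo: recover a 3-sparse signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)

beta_hat = trimmed_lasso_heuristic(X, y, k=3, lam=1.0)
print(trimmed_lasso_penalty(beta_hat, k=3))   # near 0 when the estimate is 3-sparse
```

The exemption-then-shrink structure mirrors the splitting T_k(beta) = ||beta||_1 minus a largest-k sum; the paper's own exact and heuristic algorithms (summary, item 3) are more sophisticated than this sketch.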