Fast Robust Model Selection in Large Datasets

Bibliographic Details
Published in: Journal of the American Statistical Association, Vol. 106, No. 493, pp. 203–212
Main Authors: Dupuis, Debbie J.; Victoria-Feser, Maria-Pia
Format: Journal Article
Language:English
Published: Alexandria, VA: American Statistical Association / Taylor & Francis, 01-03-2011
Description
Summary: Large datasets are increasingly common in many research fields. In particular, in the linear regression context, it is often the case that a huge number of potential covariates are available to explain a response variable, and the first step of a reasonable statistical analysis is to reduce the number of covariates. This can be done in a forward selection procedure that includes selecting the variable to enter, deciding to retain it or stop the selection, and estimating the augmented model. Least squares plus t tests can be fast, but the outcome of a forward selection might be suboptimal when there are outliers. In this article we propose a complete algorithm for fast robust model selection, including considerations for huge sample sizes. Because simply replacing the classical statistical criteria with robust ones is not computationally possible, we develop simplified robust estimators, selection criteria, and testing procedures for linear regression. The robust estimator is a one-step weighted M-estimator that can be biased if the covariates are not orthogonal. We show that the bias can be made smaller by iterating the M-estimator one or more steps further. In the variable selection process, we propose a simplified robust criterion based on a robust t statistic that we compare with a false discovery rate-adjusted level. We carry out a simulation study to show the good performance of our approach. We also analyze two datasets and show that the results obtained by our method outperform those from robust least angle regression and random forests. Supplemental materials are available online.
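The abstract describes three ingredients: a k-step weighted M-estimator started from least squares, a robust t statistic for the candidate variable, and a forward-selection loop that stops when no candidate passes an FDR-adjusted level. The sketch below illustrates that general structure only; it is not the authors' algorithm. The Huber weight function, the MAD scale estimate, the normal critical value, and the per-step level `alpha * (step + 1) / d` are all illustrative choices of ours, and the paper's exact estimator, standard errors, and FDR rule differ in detail.

```python
import numpy as np
from statistics import NormalDist

def huber_weights(r, scale, c=1.345):
    """Huber weights: 1 for standardized residuals inside [-c, c], c/|u| outside."""
    u = np.abs(r) / scale
    w = np.ones_like(u)
    big = u > c
    w[big] = c / u[big]
    return w

def k_step_wm(X, y, k=2):
    """k-step weighted M-estimator started from least squares.

    The abstract notes the one-step estimator can be biased when covariates
    are not orthogonal; iterating the step one or more times reduces the bias.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(k):
        r = y - X @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        w = huber_weights(r, scale)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # one weighted-LS step
    return beta, w, scale

def robust_t(X, y, j):
    """Rough robust t statistic for the j-th coefficient of model matrix X."""
    beta, w, scale = k_step_wm(X, y)
    cov = np.linalg.inv(X.T @ (X * w[:, None]))
    return beta[j] / (scale * np.sqrt(cov[j, j]))

def forward_select(X, y, alpha=0.05):
    """Forward selection with a robust t test and a per-step adjusted level."""
    n, d = X.shape
    selected, remaining = [], list(range(d))
    Xcur = np.ones((n, 1))  # start from the intercept-only model
    while remaining:
        tstats = [abs(robust_t(np.hstack([Xcur, X[:, [j]]]), y, Xcur.shape[1]))
                  for j in remaining]
        best = int(np.argmax(tstats))
        # Illustrative FDR-style adjustment of the entry level
        # (not the paper's exact rule): test at level alpha * (step + 1) / d.
        level = alpha * (len(selected) + 1) / d
        crit = NormalDist().inv_cdf(1 - level / 2)
        if tstats[best] < crit:
            break  # stop: no remaining candidate is significant
        j = remaining.pop(best)
        selected.append(j)
        Xcur = np.hstack([Xcur, X[:, [j]]])
    return selected
```

On data where a few responses carry large outliers, the Huber weights downweight the contaminated observations, so the entry tests are driven by the bulk of the data rather than by the outliers, which is the failure mode of least squares plus classical t tests that the abstract highlights.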
ISSN: 0162-1459 (print); 1537-274X (online)
DOI: 10.1198/jasa.2011.tm09650