Fast Robust Model Selection in Large Datasets
Published in: Journal of the American Statistical Association, Vol. 106, No. 493, pp. 203–212
Main Authors: Debbie J. Dupuis; Maria-Pia Victoria-Feser
Format: Journal Article
Language: English
Published: Alexandria, VA: American Statistical Association / Taylor & Francis, 01-03-2011
Online Access: Get full text
Abstract: Large datasets are increasingly common in many research fields. In particular, in the linear regression context, it is often the case that a huge number of potential covariates are available to explain a response variable, and the first step of a reasonable statistical analysis is to reduce the number of covariates. This can be done in a forward selection procedure that includes selecting the variable to enter, deciding to retain it or stop the selection, and estimating the augmented model. Least squares plus t tests can be fast, but the outcome of a forward selection might be suboptimal when there are outliers. In this article we propose a complete algorithm for fast robust model selection, including considerations for huge sample sizes. Because simply replacing the classical statistical criteria with robust ones is not computationally possible, we develop simplified robust estimators, selection criteria, and testing procedures for linear regression. The robust estimator is a one-step weighted M-estimator that can be biased if the covariates are not orthogonal. We show that the bias can be made smaller by iterating the M-estimator one or more steps further. In the variable selection process, we propose a simplified robust criterion based on a robust t statistic that we compare with a false discovery rate-adjusted level. We carry out a simulation study to show the good performance of our approach. We also analyze two datasets and show that the results obtained by our method outperform those from robust least angle regression and random forests. Supplemental materials are available online.
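The abstract describes forward selection driven by a robust t statistic from a one-step weighted M-estimator: start from least squares, downweight observations with large residuals, and take one weighted least-squares step. The following is a toy sketch of that idea only, not the authors' algorithm; the Huber tuning constant, the MAD scale estimate, and the covariance approximation for the t statistics are illustrative assumptions.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weights: 1 for |r| <= c, c/|r| beyond (c is an assumed tuning constant)."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > c
    w[mask] = c / a[mask]
    return w

def one_step_wls(X, y):
    """One-step weighted M-estimator: LS start, then a single Huber-weighted WLS step."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta0
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale of residuals
    w = huber_weights(r / scale)
    Xw = X * w[:, None]                       # rows of X scaled by their weights
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # (X'WX)^{-1} X'Wy
    # Rough t statistics from the weighted information matrix (illustrative only)
    resid = y - X @ beta
    s2 = np.sum(w * resid**2) / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(Xw.T @ X)
    t = beta / np.sqrt(np.diag(cov))
    return beta, t

# Simulated data with a few gross outliers in the response
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)
y[:10] += 15.0

Xc = np.column_stack([np.ones(n), X])
beta, t = one_step_wls(Xc, y)
```

In a forward-selection loop, each candidate covariate's robust t statistic would be compared against a false discovery rate-adjusted level to decide entry; the one-step estimator above is what keeps each step cheap on large samples.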
Authors: Debbie J. Dupuis; Maria-Pia Victoria-Feser
CODEN: JSTNAL
Copyright: 2011 American Statistical Association; 2015 INIST-CNRS
DOI: 10.1198/jasa.2011.tm09650
Discipline: Statistics; Mathematics
EISSN: 1537-274X
Genre: Theory and Methods; Feature
ISSN: 0162-1459
Open Access: Yes (peer-reviewed scholarly article)
Keywords: Rank statistic; Correlation; Forests; Statistical distribution; Sample size; Bias; Estimator robustness; M-estimator; Multivariate analysis; Covariate; Statistical simulation; Parametric method; Outlier; Statistical test; Least squares method; Least angle regression; Selection method; Approximation theory; Fast algorithm; Sample survey; Variable selection; Discriminant analysis; Statistical analysis; Linear regression; Model selection; Statistical association; Statistical estimation; Random forests; T statistic; Statistical method; Statistical regression; Selection problem; Sampling theory; Correlation analysis; Multicollinearity; False discovery rate; Robust t test; M estimation; Biased estimation; Partial correlation
License: CC BY 4.0
Open Access Link: https://access.archive-ouverte.unige.ch/access/metadata/14cd9aa6-ac40-4eb6-b348-50222672a4dd/download
Subject Terms: Applications; Applied statistics; Computational methods; Correlations; Criteria; Data analysis; Data processing; Datasets; Estimation bias; Estimators; Exact sciences and technology; False discovery rate; General topics; Least angle regression; Linear inference, regression; Linear regression; M-estimator; Mathematics; Modeling; Multicollinearity; Outliers; Parameter estimation; Parametric inference; Partial correlation; Probability and statistics; Random forests; Random variables; Regression analysis; Robust t test; Sciences and techniques of general use; Statistical analysis; Statistics; Theory and Methods
URI: https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm09650
     https://www.jstor.org/stable/41415545
     https://www.proquest.com/docview/867406489
     https://search.proquest.com/docview/873847497