Calculating the true significance level of predictors when carrying out the procedure of regression equation specification
Published in: Статистика и экономика (Statistics and Economics), no. 3, pp. 10-20
Format: Journal Article
Language: English
Published: Plekhanov Russian University of Economics, 01-07-2017
Summary: The paper is devoted to a new randomization method that yields unbiased adjustments of p-values for the predictors of linear regression models by incorporating the number of potential explanatory variables, their variance-covariance matrix, and its uncertainty, based on the number of observations. This adjustment helps to control type I errors in scientific studies, significantly decreasing the number of publications that report false relations as authentic ones. A comparative analysis with existing methods such as the Bonferroni correction and the Shehata and White adjustments explicitly shows their imperfections, especially in the case when the number of observations and the number of potential explanatory variables are approximately equal. The comparative analysis also showed that when the variance-covariance matrix of a set of potential predictors is diagonal, i.e. the data are independent, the proposed simple correction is the best and easiest-to-implement way to obtain unbiased corrections of traditional p-values. However, in the presence of strongly correlated data, the simple correction overestimates the true p-values, which can lead to type II errors. It was also found that the corrected p-values depend on the number of observations, the number of potential explanatory variables, and the sample variance-covariance matrix. For example, if there are only two potential explanatory variables competing for one position in the regression model and they are weakly correlated, the corrected p-value is lower when the number of observations is smaller, and vice versa: if the data are highly correlated, the case with a larger number of observations shows a lower corrected p-value. With increasing correlation, all corrections, regardless of the number of observations, tend to the original p-value.
This phenomenon is easy to explain: as the correlation coefficient tends to one, the two variables become almost linearly dependent, and if one of them is significant, the other will almost certainly show the same significance. On the other hand, if the sample variance-covariance matrix tends to be diagonal and the number of observations tends to infinity, the proposed numerical method returns corrections close to the simple correction. When the number of observations is much greater than the number of potential predictors, the Shehata and White corrections give approximately the same corrections as the proposed numerical method. However, in the much more common case when the number of observations is comparable to the number of potential predictors, the existing methods demonstrate significant inaccuracies. When the number of potential predictors exceeds the available number of observations, it appears impossible to calculate the true p-values. It is therefore recommended not to consider such datasets when constructing regression models, since only the fulfillment of the above condition ensures calculation of unbiased p-value corrections. The proposed method is easy to program and can be integrated into any statistical software package.
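The abstract does not give the exact formula of the "simple correction" for the independent (diagonal covariance) case; a standard stand-in for that situation is the Šidák-type adjustment, shown below next to the Bonferroni correction the abstract compares against. Both take a raw p-value and the number m of candidate predictors that competed for the slot (the names and the example values here are illustrative, not from the paper):

```python
def bonferroni(p, m):
    """Bonferroni correction: multiply by the number of candidates, capped at 1."""
    return min(1.0, m * p)

def sidak(p, m):
    """Sidak-type correction: exact for m independent tests under the null."""
    return 1.0 - (1.0 - p) ** m

# Illustrative values: raw p = 0.01, m = 10 candidate predictors.
p, m = 0.01, 10
print(round(bonferroni(p, m), 4))  # 0.1
print(round(sidak(p, m), 4))       # 0.0956
```

Bonferroni is always at least as large as the Šidák value, which is consistent with the abstract's point that simple corrections of this kind overstate the true p-value once the candidates are correlated.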
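The general randomization idea described in the abstract — an adjustment that reflects the number of observations, the number of candidate predictors, and their empirical covariance — can be sketched as a Monte Carlo selection-adjusted p-value: redraw the response under the global null and see how often the best of the m candidates looks at least as good as the observed best. This is a generic sketch of that idea under assumed Gaussian nulls, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_adjusted_pvalue(X, y, n_sim=2000, rng=rng):
    """Monte Carlo selection-adjusted p-value for the best of m candidates.

    Under the global null, y is independent of every column of X, so the
    null distribution of the *maximal* absolute sample correlation can be
    estimated by redrawing y; the adjustment automatically reflects n, m,
    and the empirical covariance structure of the candidate predictors.
    """
    n, m = X.shape
    Xc = (X - X.mean(0)) / X.std(0)          # standardize candidates
    yc = (y - y.mean()) / y.std()
    observed = np.max(np.abs(Xc.T @ yc) / n)  # max |sample correlation|
    exceed = 0
    for _ in range(n_sim):
        y_null = rng.standard_normal(n)       # null response, independent of X
        y_null = (y_null - y_null.mean()) / y_null.std()
        if np.max(np.abs(Xc.T @ y_null) / n) >= observed:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)         # standard Monte Carlo estimator

# Illustrative run: 30 observations, 8 candidate predictors, null response.
X = rng.standard_normal((30, 8))
y = rng.standard_normal(30)
p_adj = selection_adjusted_pvalue(X, y)
```

Because the null draws use the actual candidate matrix X, correlated candidates produce a less extreme null distribution of the maximum, so the adjustment shrinks toward the raw p-value as correlation grows — matching the limiting behavior the abstract describes.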
ISSN: 2500-3925
DOI: 10.21686/2500-3925-2017-3-10-20