Abstract
Bayesian variable selection has gained much empirical success recently in a variety of applications when the number K of explanatory variables (x_1, …, x_K) is possibly much larger than the sample size n. For generalized linear models, if most of the x_j's have very small effects on the response y, we show that it is possible to use Bayesian variable selection to reduce overfitting caused by the curse of dimensionality K ≫ n. In this approach, a suitable prior can be used to choose a few out of the many x_j's to model y, so that the posterior will propose probability densities p that are "often close" to the true density p* in some sense. The closeness can be described by a Hellinger distance between p and p* that scales at a power very close to n^{-1/2}, which is the "finite-dimensional rate" corresponding to a low-dimensional situation. These findings extend some recent work of Jiang [Technical Report 05-02 (2005), Dept. Statistics, Northwestern Univ.] on consistency of Bayesian variable selection for binary classification.
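For readers skimming the result, the rate claim can be written out as a posterior concentration statement. The display below is a sketch in this note's own notation, not the paper's: d_H denotes the Hellinger distance, ν a dominating measure, Π(· | data) the posterior, and ε_n, δ are rate symbols introduced here to paraphrase the abstract.

```latex
% Hedged sketch of the concentration statement; the notation
% (d_H, \nu, \Pi, \varepsilon_n, \delta) is chosen for this note,
% not quoted from the paper's theorems.
\[
  d_H(p, p^*) \;=\; \left( \int \bigl( \sqrt{p} - \sqrt{p^*} \bigr)^2 \, d\nu \right)^{1/2},
\]
\[
  \Pi\bigl( \{\, p : d_H(p, p^*) > \varepsilon_n \,\} \,\big|\, \text{data} \bigr)
  \;\longrightarrow\; 0 \quad \text{in probability},
  \qquad \varepsilon_n \asymp n^{-1/2 + \delta},\ \delta > 0 \text{ arbitrarily small}.
\]
```

Here ε_n ≍ n^{-1/2+δ} makes "a power very close to n^{-1/2}" precise: even though K ≫ n, the exponent of the contraction rate can be brought arbitrarily close to the parametric value 1/2.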
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1487-1511 |
| Number of pages | 25 |
| Journal | Annals of Statistics |
| Volume | 35 |
| Issue number | 4 |
| State | Published - Aug 2007 |
Keywords
- Convergence rates
- Generalized linear models
- High dimensional data
- Posterior distribution
- Prior distribution
- Sparsity
- Variable selection
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty