Comparison of variable selection methods for clinical predictive modeling

L. Nelson Sanchez-Pinto, Laura Ruth Venable, John Fahrenbach, Matthew M. Churpek*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

161 Scopus citations

Abstract

Objective: Modern machine learning-based modeling methods are increasingly applied to clinical problems. One such application is in variable selection methods for predictive modeling. However, there is limited research comparing the performance of classic and modern methods for variable selection in clinical datasets. Materials and Methods: We analyzed the performance of eight different variable selection methods: four regression-based methods (stepwise backward selection using p-value and AIC, Least Absolute Shrinkage and Selection Operator, and Elastic Net) and four tree-based methods (Variable Selection Using Random Forest, Regularized Random Forests, Boruta, and Gradient Boosted Feature Selection). We used two clinical datasets of different sizes, a multicenter adult clinical deterioration cohort and a single-center pediatric acute kidney injury cohort. Method evaluation included measures of parsimony, variable importance, and discrimination. Results: In the large, multicenter dataset, the modern tree-based Variable Selection Using Random Forest and Gradient Boosted Feature Selection methods achieved the best parsimony. In the smaller, single-center dataset, the classic regression-based stepwise backward selection using p-value and AIC methods achieved the best parsimony. In both datasets, variable selection tended to decrease the accuracy of the random forest models and increase the accuracy of the logistic regression models. Conclusions: The performance of classic regression-based and modern tree-based variable selection methods is associated with the size of the clinical dataset used. Classic regression-based variable selection methods seem to achieve better parsimony in clinical prediction problems in smaller datasets, while modern tree-based methods perform better in larger datasets.
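The abstract contrasts a regression-based and a tree-based family of variable selection methods, each followed by a downstream prediction model scored on parsimony and discrimination. The sketch below is a rough illustration of that workflow, not the authors' pipeline: it uses scikit-learn's LASSO-penalized logistic regression and random-forest importances as stand-ins for the two families (the study's specific tree-based tools such as VSURF, RRF, Boruta, and GBFS are separate R packages), runs on synthetic data, and reports the number of variables kept (parsimony) and the AUROC (discrimination). All dataset sizes, thresholds, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch: compare a regression-based (LASSO) and a tree-based
# (random-forest importance) variable selection step, then score the
# discrimination (AUROC) of a logistic regression refit on the kept variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a clinical cohort: 1000 encounters, 50 candidate variables.
X, y = make_classification(n_samples=1000, n_features=50, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selectors = {
    "lasso": SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    "random_forest": SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0),
        threshold="median"),
}

for name, selector in selectors.items():
    selector.fit(X_tr, y_tr)
    kept = selector.get_support()          # boolean mask of selected variables
    # Refit a plain logistic regression on the selected variables only.
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, kept], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, kept])[:, 1])
    print(f"{name}: kept {kept.sum()} of {X.shape[1]} variables, AUROC={auc:.3f}")
```

In practice, as the abstract notes, which family yields the more parsimonious model can depend on cohort size, so a comparison like this should be repeated on the dataset of interest rather than assumed from one benchmark.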

Original language: English (US)
Pages (from-to): 10-17
Number of pages: 8
Journal: International Journal of Medical Informatics
Volume: 116
State: Published - Aug 2018

Keywords

  • Data interpretation, statistical
  • Electronic health records
  • Machine learning
  • Models, statistical
  • Regression analysis
  • Variable selection

ASJC Scopus subject areas

  • Health Informatics
