Implications of nine risk prediction models for selecting ever-smokers for computed tomography lung cancer screening

Hormuzd A. Katki*, Stephanie A. Kovalchik, Lucia Catherine Petito, Li C. Cheung, Eric Jacobs, Ahmedin Jemal, Christine D. Berg, Anil K. Chaturvedi

*Corresponding author for this work

Research output: Contribution to journal › Article

17 Citations (Scopus)

Abstract

Background: Lung cancer screening guidelines recommend using individualized risk models to refer ever-smokers for screening. However, different models select different screening populations. The performance of each model in selecting ever-smokers for screening is unknown. Objective: To compare the U.S. screening populations selected by 9 lung cancer risk models (the Bach model; the Spitz model; the Liverpool Lung Project [LLP] model; the LLP Incidence Risk Model [LLPi]; the Hoggart model; the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial Model 2012 [PLCOM2012]; the Pittsburgh Predictor; the Lung Cancer Risk Assessment Tool [LCRAT]; and the Lung Cancer Death Risk Assessment Tool [LCDRAT]) and to examine their predictive performance in 2 cohorts. Design: Population-based prospective studies. Setting: United States. Participants: Models selected U.S. screening populations by using data from the National Health Interview Survey from 2010 to 2012. Model performance was evaluated using data from 337 388 ever-smokers in the National Institutes of Health–AARP Diet and Health Study and 72 338 ever-smokers in the CPS-II (Cancer Prevention Study II) Nutrition Survey cohort. Measurements: Model calibration (ratio of model-predicted to observed cases [expected–observed ratio]) and discrimination (area under the curve [AUC]). Results: At a 5-year risk threshold of 2.0%, the models chose U.S. screening populations ranging from 7.6 million to 26 million ever-smokers. These disagreements occurred because, in both validation cohorts, 4 models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) were well-calibrated (expected–observed ratio range, 0.92 to 1.12) and had higher AUCs (range, 0.75 to 0.79) than 5 models that generally overestimated risk (expected–observed ratio range, 0.83 to 3.69) and had lower AUCs (range, 0.62 to 0.75). 
The 4 best-performing models also had the highest sensitivity at a fixed specificity (and vice versa) and similar discrimination at a fixed risk threshold. These models showed better agreement on size of the screening population (7.6 million to 10.9 million) and achieved consensus on 73% of persons chosen. Limitation: No consensus on risk thresholds for screening. Conclusion: The 9 lung cancer risk models chose widely differing U.S. screening populations. However, 4 models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) most accurately predicted risk and performed best in selecting ever-smokers for screening.
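The three quantities the abstract leans on — calibration as an expected–observed ratio, discrimination as AUC, and risk-threshold selection — can be sketched in a few lines. This is a minimal illustration with hypothetical predicted risks and outcomes, not data or code from the study; the study's actual evaluation (cohort follow-up, competing risks, survey weighting) is far more involved.

```python
def expected_observed_ratio(predicted_risks, outcomes):
    """Calibration: sum of model-predicted risks (expected cases) divided by
    the number of observed cases; a value near 1.0 indicates good calibration."""
    return sum(predicted_risks) / sum(outcomes)

def auc(predicted_risks, outcomes):
    """Discrimination: probability that a randomly chosen case has a higher
    predicted risk than a randomly chosen non-case (Mann-Whitney form,
    ties counted as 0.5)."""
    cases = [r for r, y in zip(predicted_risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(predicted_risks, outcomes) if y == 0]
    wins = sum((c > n) + 0.5 * (c == n) for c in cases for n in noncases)
    return wins / (len(cases) * len(noncases))

def select_for_screening(predicted_risks, threshold=0.02):
    """Risk-based selection: refer individuals whose predicted 5-year risk
    meets or exceeds the threshold (2.0% in the abstract)."""
    return [i for i, r in enumerate(predicted_risks) if r >= threshold]

# Hypothetical mini-cohort: predicted 5-year risks and observed outcomes.
risks = [0.005, 0.015, 0.022, 0.031, 0.008, 0.045, 0.012, 0.027]
events = [0, 0, 1, 0, 0, 1, 0, 1]

print(expected_observed_ratio(risks, events))
print(auc(risks, events))
print(select_for_screening(risks))
```

Under this framing, a model can discriminate well (high AUC) yet still be poorly calibrated (expected–observed ratio far from 1), which is why the abstract reports both metrics for each of the 9 models.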

Original language: English (US)
Pages (from-to): 10-19
Number of pages: 10
Journal: Annals of Internal Medicine
Volume: 169
Issue number: 1
DOI: 10.7326/M17-2701
State: Published - Jul 3 2018


ASJC Scopus subject areas

  • Internal Medicine

Cite this

Katki, Hormuzd A.; Kovalchik, Stephanie A.; Petito, Lucia Catherine; Cheung, Li C.; Jacobs, Eric; Jemal, Ahmedin; Berg, Christine D.; Chaturvedi, Anil K. / Implications of nine risk prediction models for selecting ever-smokers for computed tomography lung cancer screening. In: Annals of internal medicine. 2018; Vol. 169, No. 1. pp. 10-19.
@article{a777f640564c464b8c1d051162ed1789,
title = "Implications of nine risk prediction models for selecting ever-smokers for computed tomography lung cancer screening",
author = "Katki, {Hormuzd A.} and Kovalchik, {Stephanie A.} and Petito, {Lucia Catherine} and Cheung, {Li C.} and Eric Jacobs and Ahmedin Jemal and Berg, {Christine D.} and Chaturvedi, {Anil K.}",
year = "2018",
month = "7",
day = "3",
doi = "10.7326/M17-2701",
language = "English (US)",
volume = "169",
pages = "10--19",
journal = "Annals of Internal Medicine",
issn = "0003-4819",
publisher = "American College of Physicians",
number = "1",

}

Katki, HA, Kovalchik, SA, Petito, LC, Cheung, LC, Jacobs, E, Jemal, A, Berg, CD & Chaturvedi, AK 2018, 'Implications of nine risk prediction models for selecting ever-smokers for computed tomography lung cancer screening', Annals of internal medicine, vol. 169, no. 1, pp. 10-19. https://doi.org/10.7326/M17-2701

Implications of nine risk prediction models for selecting ever-smokers for computed tomography lung cancer screening. / Katki, Hormuzd A.; Kovalchik, Stephanie A.; Petito, Lucia Catherine; Cheung, Li C.; Jacobs, Eric; Jemal, Ahmedin; Berg, Christine D.; Chaturvedi, Anil K.

In: Annals of internal medicine, Vol. 169, No. 1, 03.07.2018, p. 10-19.


TY - JOUR

T1 - Implications of nine risk prediction models for selecting ever-smokers for computed tomography lung cancer screening

AU - Katki, Hormuzd A.

AU - Kovalchik, Stephanie A.

AU - Petito, Lucia Catherine

AU - Cheung, Li C.

AU - Jacobs, Eric

AU - Jemal, Ahmedin

AU - Berg, Christine D.

AU - Chaturvedi, Anil K.

PY - 2018/7/3

Y1 - 2018/7/3


UR - http://www.scopus.com/inward/record.url?scp=85049736806&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85049736806&partnerID=8YFLogxK

U2 - 10.7326/M17-2701

DO - 10.7326/M17-2701

M3 - Article

C2 - 29800127

AN - SCOPUS:85049736806

VL - 169

SP - 10

EP - 19

JO - Annals of Internal Medicine

JF - Annals of Internal Medicine

SN - 0003-4819

IS - 1

ER -