Abstract
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the SE is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative symptoms), and the bank may lack depth for many individuals. In such cases, the predicted standard error reduction (PSER) stopping rule will stop the CAT even if the SE threshold has not been reached and can avoid administering excessive questions that provide little additional information. By tuning the parameters of the PSER algorithm, a practitioner can specify a desired tradeoff between accuracy and efficiency. Using simulated data for the Patient-Reported Outcomes Measurement Information System Anxiety and Physical Function banks, we demonstrate that these parameters can substantially impact CAT performance. When the parameters were optimally tuned, the PSER stopping rule was found to outperform the SE stopping rule overall, particularly for individuals not targeted by the bank, and presented roughly the same number of items across the trait continuum. Therefore, the PSER stopping rule provides an effective method for balancing the precision and efficiency of a CAT.
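The contrast between the two stopping rules can be sketched in a few lines of code. This is a minimal illustration of the logic described above, not the authors' implementation: the function names, the SE threshold, and the `min_reduction` tuning parameter are all hypothetical stand-ins for the quantities the abstract mentions.

```python
def se_stop(se: float, threshold: float = 0.3) -> bool:
    """SE rule: stop the CAT once the standard error falls below a threshold.
    (Threshold value is illustrative.)"""
    return se < threshold


def pser_stop(se: float, predicted_se_next: float,
              threshold: float = 0.3, min_reduction: float = 0.01) -> bool:
    """PSER-style rule (sketch): stop if the SE threshold is met, OR if the
    best remaining item is predicted to reduce the SE by less than some
    minimum amount -- i.e., further items add little information.
    `min_reduction` is the kind of tunable parameter that trades accuracy
    against efficiency."""
    if se < threshold:
        return True
    return (se - predicted_se_next) < min_reduction
```

Under this sketch, an examinee far from the bank's targeted trait range (large SE, but negligible predicted reduction) would be stopped by the PSER rule while the SE rule would keep administering items.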
Original language | English (US) |
---|---|
Pages (from-to) | 146-168 |
Number of pages | 23 |
Journal | International Journal of Testing |
Volume | 20 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2 2020 |
Funding
This work was supported by National Library of Medicine grants R01LM011962 and R01LM011663.
Keywords
- computer adaptive testing
- item response theory
- patient-reported outcomes
- stopping rule
ASJC Scopus subject areas
- Social Psychology
- Education
- Modeling and Simulation