Abstract
Background: An important question in the design of experiments is how to ensure that the findings from the experiment are generalizable to a larger population. This concern with generalizability is particularly important when treatment effects are heterogeneous and when selecting units into the experiment using random sampling is not possible, two conditions commonly met in large-scale educational experiments. Method: This article introduces a model-based balanced-sampling framework for improving generalizations, with a focus on developing methods that are robust to model misspecification. Additionally, the article provides a new method for sample selection within this framework: First, units in an inference population are divided into relatively homogeneous strata using cluster analysis, and then the sample is selected using distance rankings. Result: In order to demonstrate and evaluate the method, a reanalysis of a completed experiment is conducted. This example compares samples selected using the new method with the actual sample used in the experiment. Results indicate that even under high nonresponse, balance is better on most covariates and that fewer coverage errors result. Conclusion: The article concludes with a discussion of additional benefits and limitations of the method.
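The two-step selection idea summarized above (cluster the inference population into relatively homogeneous strata, then pick units within each stratum by distance ranking) can be illustrated with a minimal sketch. The sketch below assumes k-means clustering on standardized covariates and Euclidean distance to stratum centroids; the covariates, number of strata, and per-stratum allocation are hypothetical and are not the article's actual specification.

```python
# Hypothetical illustration of stratify-then-rank sample selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical population of 5,000 units described by a few covariates
# (e.g., enrollment, percent free/reduced lunch, prior achievement).
population = rng.normal(size=(5000, 3))

# Step 1: standardize covariates and divide the population into k strata.
X = StandardScaler().fit_transform(population)
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Step 2: within each stratum, rank units by distance to the stratum centroid
# and select the closest units first; the ranked list also provides an ordered
# set of replacements if a recruited unit declines to participate.
n_per_stratum = 12  # illustrative allocation
selected = []
for stratum in range(k):
    idx = np.flatnonzero(km.labels_ == stratum)
    dists = np.linalg.norm(X[idx] - km.cluster_centers_[stratum], axis=1)
    ranked = idx[np.argsort(dists)]          # recruitment order for this stratum
    selected.extend(ranked[:n_per_stratum])  # take the most "typical" units first

print(f"Selected {len(selected)} units across {k} strata.")
```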
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 109-139 |
| Number of pages | 31 |
| Journal | Evaluation Review |
| Volume | 37 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 2013 |
Keywords
- cluster analysis
- experimental design
- external validity
- model-based sampling
- stratified sampling
- treatment effect heterogeneity
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Social Sciences (all)