Abstract
School-based evaluations of interventions are increasingly common in education research. Ideally, the results of these evaluations are used to make evidence-based policy decisions for students. However, it is difficult to generalize from these evaluations because the schools included in the studies are typically not selected at random from a target population. This paper provides an overview of statistical methods for improving generalizations from intervention research in education. These methods are presented as a series of steps for improving research design, particularly recruitment, together with methods for assessing and summarizing generalizability and for estimating treatment impacts for clearly defined target populations.
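The abstract refers to estimating treatment impacts for a clearly defined target population when study schools are not sampled at random, and the keywords point to propensity-score methods. A common approach in this literature is to reweight the study sample using estimated sampling propensity scores so that it better resembles the population. The sketch below illustrates that general idea on synthetic data; it is not the paper's specific procedure, and all variable names and data-generating assumptions here are hypothetical.

```python
# Illustrative sketch (not the paper's exact method): reweighting a sample
# average treatment effect toward a target population using estimated
# sampling propensity scores. All data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates for every school in a target population
# (e.g., enrollment size and share of students eligible for free lunch).
n_pop = 2000
X_pop = rng.normal(size=(n_pop, 2))

# Schools with larger values of the first covariate are (hypothetically)
# more likely to volunteer for the study -- a non-random selection process.
p_select = 1 / (1 + np.exp(-(-2.0 + 1.2 * X_pop[:, 0])))
in_sample = rng.random(n_pop) < p_select

# Fit a sampling propensity model: P(school is in the study | covariates).
ps_model = LogisticRegression().fit(X_pop, in_sample)
ps = ps_model.predict_proba(X_pop)[:, 1]

# Hypothetical school-level treatment effects that vary with the covariates,
# so the sample average effect differs from the population average effect.
tau = 0.3 + 0.5 * X_pop[:, 0] + rng.normal(scale=0.1, size=n_pop)
tau_sample = tau[in_sample]

# Unweighted sample estimate vs. an inverse-probability-weighted estimate
# that reweights sampled schools to represent the full target population.
w = 1.0 / ps[in_sample]
sate_hat = tau_sample.mean()
pate_hat = np.average(tau_sample, weights=w)

print(f"True population mean effect: {tau.mean():.3f}")
print(f"Unweighted sample estimate:  {sate_hat:.3f}")
print(f"Reweighted estimate:         {pate_hat:.3f}")
```

In this synthetic setup the unweighted sample estimate is biased upward because the volunteering schools are unrepresentative, while the reweighted estimate moves closer to the population mean; the same logic underlies propensity-score-based generalization diagnostics and estimators discussed in this literature.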
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 516-524 |
| Number of pages | 9 |
| Journal | Educational Researcher |
| Volume | 47 |
| Issue number | 8 |
| DOIs | |
| State | Published - Nov 1 2018 |
Keywords
- educational policy
- evaluation
- experimental design
- experimental research
- external validity
- generalizability
- multisite studies
- policy
- program evaluation
- propensity scores
- research methodology
- sampling
- statistics
ASJC Scopus subject areas
- Education