Abstract
Clearinghouses set standards of scientific quality to vet existing research and determine how “evidence-based” an intervention is. This paper examines 12 educational clearinghouses to describe their effectiveness criteria, to estimate how consistently they rate the same program, and to probe why their judgments differ. All the clearinghouses value random assignment, but they differ in how they treat its implementation, how they weight quasi-experiments, and how they value ancillary causal factors like independent replication and persisting effects. A total of 1,359 programs were analyzed across 10 clearinghouses; 83% of them were assessed by a single clearinghouse, and, of those rated by more than one, similar ratings were achieved for only about 30% of the programs. This high level of inconsistency seems to be mostly due to clearinghouses disagreeing about whether a high program rating requires effects that are replicated and/or temporally persisting. Clearinghouses exist to identify “evidence-based” programs, but the inconsistency in their recommendations of the same program suggests that identifying “evidence-based” interventions is still more of a policy aspiration than a reliable research practice.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 3-32 |
| Number of pages | 30 |
| Journal | Review of Educational Research |
| Volume | 94 |
| Issue number | 1 |
| State | Published - Feb 2024 |
Keywords
- case studies
- causal identification
- clearinghouse
- descriptive analysis
- evaluation
- evidence-based
- experimental research
- mixed methods
- policy
- policy analysis
- program evaluation
- research methodology
- research utilization
- validity/reliability
- what works
ASJC Scopus subject areas
- Education