TY - JOUR
T1 - How Consistently Do 13 Clearinghouses Identify Social and Behavioral Development Programs as “Evidence-Based”?
AU - Zheng, Jingwen
AU - Wadhwa, Mansi
AU - Cook, Thomas D.
N1 - Funding Information:
This work was supported by NSF Grant 176458.
Publisher Copyright:
© 2022, Society for Prevention Research.
PY - 2022/11
Y1 - 2022/11
N2 - Clearinghouses develop scientific criteria that they then use to vet existing research studies on a program to reach a verdict about how evidence-based it is. This verdict is then recorded on a website in hopes that stakeholders in science, public policy, the media, and even the general public, will consult it. This paper (1) compares the causal design and analysis preferences of 13 clearinghouses that assess the effectiveness of social and behavioral development programs, (2) estimates how consistently these clearinghouses rank the same program, and then (3) uses case studies to probe why their conclusions differ. Most clearinghouses place their highest value on randomized control trials, but they differ in how they treat program implementation, quasi-experiments, and whether their highest program ratings require effects of a given size that independently replicate or that temporally persist. Of the 2525 social and behavioral development programs sampled over clearinghouses, 82% (n = 2069) were rated by a single clearinghouse. Of the 297 programs rated by two clearinghouses, agreement about program effectiveness was obtained for about 55% (n = 164), but the clearinghouses agreed much more on program ineffectiveness than effectiveness. Most of the inconsistency is due to clearinghouses’ differences in requiring independently replicated and/or temporally sustained effects. Without scientific consensus about matters like these, “evidence-based” will remain more of an aspiration than achievement in the social and behavioral sciences.
KW - Clearinghouse
KW - Evidence-based
KW - Social and behavioral development programs
UR - http://www.scopus.com/inward/record.url?scp=85137106105&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137106105&partnerID=8YFLogxK
U2 - 10.1007/s11121-022-01407-y
DO - 10.1007/s11121-022-01407-y
M3 - Article
C2 - 36040619
AN - SCOPUS:85137106105
SN - 1389-4986
VL - 23
SP - 1343
EP - 1358
JO - Prevention Science
JF - Prevention Science
IS - 8
ER -