Project Details
Description
A central problem in educational policy is the identification of effective educational interventions. Great emphasis is rightly placed on the use of randomized experiments in the study of effectiveness; but when randomized experiments are not feasible or ethical, a variety of quasi-experimental designs are used to estimate effects. One of these, the single-case design (SCD), is the focus of the proposed research. SCDs are used in such education-related fields as autism, learning disorders, school psychology, special education, developmental disorders, remedial education, early intervention, exceptional children, behavioral problems, and speech, language, and hearing research (Shadish & Sullivan, 2011; Smith, 2012). The What Works Clearinghouse (WWC) and the APA Division 16 Task Force on Evidence-Based Interventions in School Psychology identified SCDs as acceptable designs for evidence-based practice reviews (Kratochwill et al., 2010; Kratochwill & Stoiber, 2002). The National Center for Special Education Research at the U.S. Institute of Education Sciences (IES) allows SCDs to be used instead of randomized experiments for efficacy studies under some conditions. Consistent with this interest, IES supports research to develop appropriate analytic methods for SCDs.
In the proposed research, we will continue to develop effect size estimators for SCDs that are in the same metric as effect sizes from between-groups designs, extending that work to (a) diverse outcome metrics and (b) situations where trend may exist. Regarding trend, the proposed research will extend our work on effect sizes for normally distributed data with trends by developing estimators whose small-sample distributions can be derived conditional on nuisance parameters, then evaluating their properties by simulation when the nuisance parameters must be estimated, and comparing their performance with the large-sample methods developed in the previous grant. Regarding outcome metrics, we will extend our work to non-normally distributed outcomes. These estimators may be developed initially in terms of other effect sizes, such as an odds ratio or rate ratio, which can be transformed into a corresponding d via existing methods when circumstances require.
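For concreteness, one such existing transformation (our illustration; the abstract does not name which conversion is intended) is the standard logit method (e.g., Hasselblad & Hedges, 1995), which treats a binary outcome as a dichotomized logistic variable and converts a log odds ratio into a standardized mean difference:

$$
d = \frac{\sqrt{3}}{\pi}\,\ln(\mathrm{OR}),
\qquad
\operatorname{Var}(d) = \frac{3}{\pi^{2}}\,\operatorname{Var}\!\big(\ln \mathrm{OR}\big).
$$

A d obtained this way is on the same scale as d-statistics computed from continuous outcomes, which is what allows pooling across outcome metrics.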
No current analytic method accomplishes these goals. The d-statistic we developed in our previous grant, for example, explicitly assumes normality and no trend. The effect size estimators in the “overlap” tradition (Parker, Vannest, & Davis, 2011) assume no autocorrelation, and all but one assume no trend. Even the best alternatives suffer from crucial problems. Consider, for instance, the Swaminathan, Rogers, and Horner (2014) and Maggin et al. (2011) d-statistics, which can account for both autocorrelation and trend. However, when dealing with count or rate outcomes, they compute a standardized mean difference (d) that is standardized by the (often much smaller) within-case standard deviation rather than the (usually much larger) between-case standard deviation always used in between-subjects designs (BSDs).
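To make the standardization problem concrete, here is a minimal simulation sketch (hypothetical data and numbers of our own choosing; not the estimator from any of the cited papers) contrasting the two denominators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multiple-baseline data: 6 cases, 10 baseline and 10
# treatment-phase observations per case.  Case means vary (between-case
# variation) on top of within-case noise; no trend or autocorrelation here.
n_cases, n_obs = 6, 10
case_means = rng.normal(50.0, 10.0, n_cases)   # between-case SD = 10
effect = 5.0                                   # true treatment shift

baseline = case_means[:, None] + rng.normal(0.0, 2.0, (n_cases, n_obs))
treatment = case_means[:, None] + effect + rng.normal(0.0, 2.0, (n_cases, n_obs))

mean_diff = treatment.mean() - baseline.mean()

# Within-case standardization: SD of observations around their own
# case-by-phase means -- the often much smaller denominator.
resid = np.concatenate([
    (baseline - baseline.mean(axis=1, keepdims=True)).ravel(),
    (treatment - treatment.mean(axis=1, keepdims=True)).ravel(),
])
d_within = mean_diff / resid.std(ddof=1)

# Between-case standardization: baseline SD pooled across cases, which
# includes case-to-case variation -- the denominator a between-subjects
# design would use.
d_between = mean_diff / baseline.ravel().std(ddof=1)

print(f"d standardized within cases:  {d_within:.2f}")   # ~2.5, inflated
print(f"d standardized between cases: {d_between:.2f}")  # ~0.5, BSD-comparable
```

Under these illustrative numbers the within-case d comes out roughly five times larger, which is why such estimates cannot be compared with, or combined with, d-statistics from between-groups studies.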
The RFA for this competition asks for justification that “there are theoretical and empirical justifications for expecting the method to function as planned”. We apply standard tools of mathematical and applied statistics in our work. First, we posit a model for the data, namely one time series for each individual; we then identify what could be (given suitable assignment) a randomized experiment embedded as a subset of the data. This subset identifies a natural effect size parameter in
| Status | Finished |
| --- | --- |
| Effective start/end date | 8/16/17 → 7/31/22 |
Funding
- Institute of Education Sciences (R305D170041 - 19)