This chapter provides a nonstatistical way of summarizing many of the main points in the preceding chapters. In particular, it takes the major assumptions outlined and translates them from formal statistical notation into ordinary English. The emphasis is on expressing specific violations of formal meta-analytic assumptions as concretely labeled threats to valid inference. This explicitly integrates statistical approaches to meta-analysis with a falsificationist framework that stresses how secure knowledge depends on ruling out alternative interpretations. Thus, we aim to refocus readers' attention on the major rationales for research synthesis and the kinds of knowledge meta-analysts seek to achieve. The special promise of meta-analysis is to foster empirical knowledge about general associations, especially causal ones, that is more secure than what other methods typically warrant. In our view, no rationale for meta-analysis is more important than its ability to identify the realm of application of a knowledge claim: that is, identifying whether the association holds with specific populations of persons, settings, times, and ways of varying the cause or measuring the effect; holds across different populations of people, settings, times, and ways of operationalizing a cause and effect; and can even be extrapolated to populations of people, settings, times, causes, and effects other than those studied to date. These are all generalization tasks that researchers face, perhaps no one more explicitly than the meta-analyst. This is why we translate violated statistical assumptions into threats to validity, particularly threats to the validity of conclusions regarding the generality of an association.
The past twenty-five years of meta-analytic practice have amply demonstrated that primary studies rarely present a census or even a random sample of the populations, universes, categories, classes, or entities (terms we use interchangeably) about which generalizations are sought. The salient exception is when random sampling occurs from some clearly designated universe, a procedure that does warrant valid generalization to the population from which the sample was drawn, usually a human population in the social sciences. But most surveys take place in decidedly restricted settings (a living room, for instance) and at a single time, and the relevant cause and effect constructs are measured without randomly selecting items. Moreover, many people are not interested in the population a particular random sample represents, but ask instead whether that same association holds with a different kind of person, in a different setting, at a different time, or with a different cause or effect. These questions concern generalization as extrapolation rather than representation (Cook 1990). How can we extrapolate from studied populations to populations with many, few, or even no overlapping attributes? The sad reality is that the general inferences meta-analysis seeks to provide cannot depend on formal sampling theory alone. Other warrants are also needed. This chapter assumes that ruling out threats to validity can serve as one such warrant. Doing so is not as simple or as elegant as sampling with known probability from a well-designated universe, but it is more flexible and has been used with success to justify how manipulations or measures are chosen to represent cause and effect constructs (that is, construct validity).
If meta-analysis is to deal with generalization understood as both representation and extrapolation, we need ways of using a particular database to justify reasonable conclusions about what the available samples represent and how they can be used to extrapolate to other kinds of persons, settings, times, causes, and effects. This chapter is not the first to propose that a framework of validity threats allows us to probe the validity of research inferences when a fundamental statistical assumption has been violated. Donald Campbell introduced his internal validity threats for instances when primary studies lack random assignment, creating quasi-experimental design as a legitimate extension of the thinking R. A. Fisher had begun (Campbell 1957; Campbell and Stanley 1963). Similarly, this chapter seeks to identify threats to valid inferences about generalization that arise in meta-analyses, particularly those that follow from infrequent random sampling. Of course, Donald Campbell and Julian Stanley also had a list of threats to external validity, and these also have to do with generalization (1963). But their list was far from complete and was developed more with primary studies in mind than with research syntheses. The question this chapter asks is how one can proceed to justify claims about the generality of an association when the within-study selection of persons, settings, times, and measures is almost never random and when it is also not even reasonable to assume that the available sample of studies is itself unbiased. This chapter proposes a threats-to-validity approach rooted in a theory of construct validity as one way to throw provisional light on how to justify general inferences.
Original language: English (US)
Title of host publication: The Handbook of Research Synthesis and Meta-Analysis, 2nd Ed.
Publisher: Russell Sage Foundation
Number of pages: 24
State: Published - Dec 1 2009
ASJC Scopus subject areas: Social Sciences (all)