Abstract
Several research programs have sought to assess the replicability of scientific findings in fields such as economics and psychology. These programs attempt to replicate many published findings and use the results to characterize large-scale patterns of replicability in a field. However, little work has been done to understand the analytic methods used to do this, including what they assess and what their statistical properties are. This article examines several methods that have been used to study patterns of replicability in the social sciences. We describe in concrete terms how each method operationalizes the idea of “replication” and examine various statistical properties, including bias, precision, and statistical power. We find that some analytic methods rely on an operational definition of replication that can be misleading. Other methods involve sounder definitions of replication, but most of these have limitations, such as large bias and uncertainty or low power. The findings suggest that we should interpret the results of such analyses with caution, and that work on more accurate methods may benefit future replication research efforts.
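One widely used operationalization of replication counts a replication as successful when it reaches statistical significance on its own. The abstract's point that such a definition can be misleading can be sketched with a small simulation (a minimal illustration; the sample size, effect size, and z-test setup below are hypothetical, not taken from the article):

```python
import numpy as np

# Hypothetical parameters (not from the article): a modest true effect,
# unit-variance outcomes, and a replication sample of n = 50.
rng = np.random.default_rng(0)
n, true_effect, sd, reps = 50, 0.3, 1.0, 10_000
z_crit = 1.96  # two-sided 5% critical value

# Draw the sample mean of each simulated replication study and apply the
# "statistical significance" criterion for declaring a successful replication.
se = sd / np.sqrt(n)
means = rng.normal(true_effect, se, size=reps)
significant = np.abs(means / se) > z_crit

power = significant.mean()
print(f"Share of replications declared 'successful': {power:.2f}")
```

Even though every simulated replication here studies a genuine, nonzero effect, a substantial fraction falls short of significance, which is one way this operational definition can understate replicability when studies have modest power.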
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 208-229 |
| Number of pages | 22 |
| Journal | Annals of Applied Statistics |
| Volume | 15 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2021 |
Keywords
- Bias
- Meta-analysis
- Power
- Replication
ASJC Scopus subject areas
- Statistics and Probability
- Modeling and Simulation
- Statistics, Probability and Uncertainty