An evaluation of statistical methods for aggregate patterns of replication failure

Jacob M. Schauer, Kaitlyn G. Fitzgerald, Sarah Peko-Spicer, Mena C.R. Whalen, Rrita Zejnullahi, Larry V. Hedges

Research output: Contribution to journal › Article › peer-review

Abstract

Several programs of research have sought to assess the replicability of scientific findings in different fields, including economics and psychology. These programs attempt to replicate several findings and use the results to say something about large-scale patterns of replicability in a field. However, little work has been done to understand the analytic methods used to do this, including what they are assessing and what their statistical properties are. This article examines several methods that have been used to study patterns of replicability in the social sciences. We describe in concrete terms how each method operationalizes the idea of “replication” and examine various statistical properties, including bias, precision, and statistical power. We find that some analytic methods rely on an operational definition of replication that can be misleading. Other methods involve more sound definitions of replication, but most of these have limitations, such as large bias and uncertainty or low power. The findings suggest that we should use caution when interpreting the results of such analyses and that work on more accurate methods may be useful to future replication research efforts.

Original language: English (US)
Pages (from-to): 208-229
Number of pages: 22
Journal: Annals of Applied Statistics
Volume: 15
Issue number: 1
State: Published - 2021

Keywords

  • Bias
  • Meta-analysis
  • Power
  • Replication

ASJC Scopus subject areas

  • Statistics and Probability
  • Modeling and Simulation
  • Statistics, Probability and Uncertainty
