Abstract
It has been known since Metropolis et al. [J. Chem. Phys. 21 (1953) 1087-1092] that one can sample from a distribution by running a Monte Carlo simulation of a Markov chain whose equilibrium distribution equals the target distribution. However, it took several decades before the statistical community embraced Markov chain Monte Carlo (MCMC) as a general computational tool in Bayesian inference. The reasons usually advanced to explain why statisticians were slow to adopt the method include a lack of computing power and unfamiliarity with the early dynamic Monte Carlo papers in the statistical physics literature. We argue that there was a deeper reason, namely, that the structure of problems in statistical mechanics differs from that of problems in the standard statistical literature. To make the methods usable in standard Bayesian problems, one had to exploit the power that comes from introducing judiciously chosen auxiliary variables and collective moves. This paper examines developments during the critical period 1980-1990, when the ideas of Markov chain simulation from the statistical physics literature and the latent variable formulation of maximum likelihood computation (i.e., the EM algorithm) came together to spark the widespread application of MCMC methods in Bayesian computation.
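As a minimal illustration of the principle stated at the start of the abstract (and not the paper's own method), the sketch below implements a random-walk Metropolis sampler for a hypothetical one-dimensional target density; the target, step size, and iteration count are all illustrative assumptions.

```python
import math
import random

def log_target(x):
    # Hypothetical unnormalized target: a standard normal log-density
    # (up to an additive constant); stands in for any target of interest.
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis: the chain's equilibrium distribution is the
    target, so long-run draws approximate samples from it."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)  # symmetric proposal
        # Accept with probability min(1, target(proposal) / target(x)),
        # computed on the log scale for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

if __name__ == "__main__":
    draws = metropolis(50_000)
    kept = draws[10_000:]  # discard burn-in before summarizing
    mean = sum(kept) / len(kept)
    print(f"estimated mean: {mean:.3f}")  # should be near 0 for this target
```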
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 506-516 |
| Number of pages | 11 |
| Journal | Statistical Science |
| Volume | 25 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2010 |
Keywords
- Data augmentation
- EM algorithm
- MCMC
ASJC Scopus subject areas
- Statistics and Probability
- General Mathematics
- Statistics, Probability and Uncertainty