Abstract
A probability distribution governing the evolution of a stochastic process has infinitely many Bayesian representations of the form μ = ∫_Θ μ_θ dλ(θ). Among these, a natural representation is one whose components (the μ_θ's) are "learnable" (each μ_θ can be approximated by conditioning μ on observation of the process) and "sufficient for prediction" (the predictions of the μ_θ's are not improved by further conditioning on observation of the process). We show the existence and uniqueness of such a representation under a suitable asymptotic mixing condition on the process. This representation can be obtained by conditioning on the tail field of the process, and any learnable representation that is sufficient for prediction is asymptotically like the tail-field representation. This result is related to the celebrated de Finetti theorem, but with exchangeability weakened to an asymptotic mixing condition, and with de Finetti's conclusion of a decomposition into i.i.d. component distributions weakened to components that are learnable and sufficient for prediction.
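As a hypothetical illustration of the de Finetti special case mentioned above (a mixture of i.i.d. Bernoulli components, not the paper's tail-field construction), the following Python sketch exhibits both properties: the posterior over θ concentrates on the component generating the path ("learnable"), while within a component the one-step-ahead prediction is θ regardless of history ("sufficient for prediction"). The parameter grid Θ and the uniform prior λ are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# De Finetti special case: mu is a mixture of i.i.d. Bernoulli(theta)
# components over a finite parameter grid Theta with prior lambda.
thetas = np.array([0.2, 0.5, 0.8])   # Theta (assumed grid)
prior = np.full(3, 1 / 3)            # lambda(theta), assumed uniform

# Draw a path from mu: pick theta ~ lambda, then x_t i.i.d. Bernoulli(theta).
true_theta = rng.choice(thetas, p=prior)
x = rng.binomial(1, true_theta, size=200)

# "Learnable": the posterior lambda(theta | x_1, ..., x_t) concentrates on
# the generating component, so mu_theta is approximated by conditioning mu.
post = prior.copy()
for xt in x:
    likelihood = thetas**xt * (1 - thetas) ** (1 - xt)
    post = post * likelihood
    post /= post.sum()

print("true theta:", true_theta)
print("posterior after 200 observations:", np.round(post, 3))

# "Sufficient for prediction": within a component, observing the process
# does not help; the one-step-ahead forecast under mu_theta is theta
# no matter the history.
print("P(x_{t+1} = 1 | theta, history) = theta =", true_theta)
```

In this i.i.d. setting the parameter θ plays the role of the tail-field component: conditioning on more data recovers it, but, once known, the data add nothing to prediction.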
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 875-893 |
| Number of pages | 19 |
| Journal | Econometrica |
| Volume | 67 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1999 |
Keywords
- Bayesian
- Learning
- Stochastic processes
ASJC Scopus subject areas
- Economics and Econometrics