Bayesian representation of stochastic processes under learning: De Finetti revisited

Matthew O. Jackson, Ehud Kalai, Rann Smorodinsky

Research output: Contribution to journal › Article › peer-review

28 Scopus citations


A probability distribution governing the evolution of a stochastic process has infinitely many Bayesian representations of the form μ = ∫_Θ μ_θ dλ(θ). Among these, a natural representation is one whose components (the μ_θ's) are "learnable" (one can approximate μ_θ by conditioning μ on observation of the process) and "sufficient for prediction" (μ_θ's predictions are not aided by conditioning on observation of the process). We show the existence and uniqueness of such a representation under a suitable asymptotic mixing condition on the process. This representation can be obtained by conditioning on the tail-field of the process, and any learnable representation that is sufficient for prediction is asymptotically like the tail-field representation. This result is related to the celebrated de Finetti theorem, but with exchangeability weakened to an asymptotic mixing condition, and with his conclusion of a decomposition into i.i.d. component distributions weakened to components that are learnable and sufficient for prediction.
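As a point of reference for the abstract's "learnable" and "sufficient for prediction" notions, here is a minimal, hypothetical sketch of the classical exchangeable case that the paper generalizes: a coin bias θ drawn from a uniform prior plays the role of a component μ_θ, and conditioning on observations of the process recovers it. This illustrates de Finetti's i.i.d. decomposition, not the paper's weaker asymptotic mixing condition.

```python
import random

random.seed(0)

# Hidden component parameter: the i.i.d. coin bias, drawn from a
# uniform (Beta(1,1)) prior -- this plays the role of theta in the mixture.
theta = random.random()

# Observe the process: n exchangeable coin flips governed by mu_theta.
n = 5000
heads = sum(random.random() < theta for _ in range(n))

# "Learnable": the posterior mean of theta under the Beta(1,1) prior
# concentrates near the true theta as observations accumulate.
posterior_mean = (heads + 1) / (n + 2)

# "Sufficient for prediction": conditional on theta, the next flip is
# heads with probability theta; conditioning further on the observed
# history adds nothing to that prediction.
print(round(posterior_mean, 3), round(theta, 3))
```

With a long enough sample, the posterior mean lands close to the realized θ, which is the exchangeable-case analogue of approximating μ_θ by conditioning μ on observations.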

Original language: English (US)
Pages (from-to): 875-893
Number of pages: 19
Issue number: 4
State: Published - 1999


Keywords

  • Bayesian
  • Learning
  • Stochastic processes

ASJC Scopus subject areas

  • Economics and Econometrics


