TY - GEN
T1 - On convergence of beliefs in a non-Bayesian model of learning
AU - Molavi, Pooya
AU - Jadbabaie, Ali
PY - 2010
Y1 - 2010
N2 - We analyze a non-Bayesian model of learning recently proposed by Epstein et al. [1]. In this model, an agent uses an i.i.d. sequence of observations to update her belief about the true state of the world: she receives a series of signals generated randomly according to a probability distribution that depends on the true state. The model differs from the standard Bayesian model in that the agent exhibits a bias towards her prior belief; instead of using Bayes' rule to incorporate new information, she forms her posterior as a convex combination of her prior and the Bayesian update. The authors of [1] show that even though the agent repeatedly underreacts to new information, her forecasts of future observations are asymptotically almost surely correct. In this paper, we prove a much stronger result: in the absence of identification problems, the agent asymptotically almost surely learns the true state of the world. We also use a linearization of the update rule governing the evolution of the agent's belief to characterize the rate of learning and bound it in terms of the Kullback-Leibler divergence between the signal distributions under the true state and under the other states.
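N1 - The update rule summarized in the abstract can be written out as follows (a sketch in notation of our choosing; the symbols \mu_t, \lambda, and \ell are not taken from the record). With belief \mu_t over states \theta, signal likelihoods \ell(s \mid \theta), and a weight \lambda \in (0, 1] on the Bayesian update, the posterior after observing signal s_{t+1} is
\mu_{t+1}(\theta) = (1 - \lambda)\,\mu_t(\theta) + \lambda\,\frac{\mu_t(\theta)\,\ell(s_{t+1} \mid \theta)}{\sum_{\theta'} \mu_t(\theta')\,\ell(s_{t+1} \mid \theta')},
which reduces to Bayes' rule when \lambda = 1 and leaves the prior unchanged when \lambda = 0.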
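N1 - A minimal simulation sketch of this update (our Python illustration, not code from the paper; the two-state setup, Bernoulli signal likelihoods, and the weight lam are assumptions made for the example):

  import numpy as np

  rng = np.random.default_rng(0)

  # Two states of the world; each state induces a Bernoulli signal distribution.
  likelihoods = np.array([[0.7, 0.3],   # P(signal = 0), P(signal = 1) under state 0
                          [0.4, 0.6]])  # P(signal = 0), P(signal = 1) under state 1
  true_state = 0
  lam = 0.3                      # weight on the Bayesian update; lam = 1 is standard Bayes
  belief = np.array([0.5, 0.5])  # uniform prior over the two states

  for t in range(2000):
      signal = rng.choice(2, p=likelihoods[true_state])   # i.i.d. signal from the true state
      bayes = belief * likelihoods[:, signal]
      bayes /= bayes.sum()                                # standard Bayesian posterior
      belief = (1 - lam) * belief + lam * bayes           # convex combination with the prior

  print(belief)  # the belief concentrates on the true state, approx. [1.0, 0.0]

With a uniform prior and lam = 0.3 the printed belief places essentially all mass on the true state, consistent with the learning result stated in the abstract.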
UR - http://www.scopus.com/inward/record.url?scp=79952392139&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79952392139&partnerID=8YFLogxK
U2 - 10.1109/ALLERTON.2010.5707046
DO - 10.1109/ALLERTON.2010.5707046
M3 - Conference contribution
AN - SCOPUS:79952392139
SN - 9781424482146
T3 - 2010 48th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2010
SP - 1174
EP - 1178
BT - 2010 48th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2010
T2 - 48th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2010
Y2 - 29 September 2010 through 1 October 2010
ER -