
Entropy Bounds on Bayesian Learning
Gossner, Olivier; Tomala, Tristan (2008), Entropy Bounds on Bayesian Learning, Journal of Mathematical Economics, 44, 1, p. 24-32. http://dx.doi.org/10.1016/j.jmateco.2007.04.006
Type: Article accepted for publication or published
Date: 2008
Journal name: Journal of Mathematical Economics
Volume: 44
Number: 1
Publisher: Elsevier
Pages: 24-32
Publication identifier: http://dx.doi.org/10.1016/j.jmateco.2007.04.006
Abstract (EN)
An observer of a process $(x_t)_{t \ge 1}$ believes the process is governed by $Q$ whereas the true law is $P$. We bound the expected average distance between $P(x_t \mid x_1, \ldots, x_{t-1})$ and $Q(x_t \mid x_1, \ldots, x_{t-1})$ for $t = 1, \ldots, n$ by a function of the relative entropy between the marginals of $P$ and $Q$ on the $n$ first realizations. We apply this bound to the cost of learning in sequential decision problems and to the merging of $Q$ to $P$.
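The bound described in the abstract is in the spirit of the standard argument that combines the chain rule for relative entropy with Pinsker's inequality; the display below is a sketch under that reading, with total variation as the distance, and is not necessarily the paper's exact statement or constant.

% Sketch: chain rule for relative entropy plus Pinsker's inequality.
% Write x^{t-1} = (x_1, \ldots, x_{t-1}) and let P^n, Q^n denote the
% marginals of P and Q on the n first realizations.
\begin{align*}
  % Chain rule: the n-stage relative entropy is the sum of the expected
  % one-step-ahead conditional relative entropies.
  D(P^n \,\|\, Q^n)
    &= \sum_{t=1}^{n} \mathbb{E}_P\!\left[ D\!\left( P(\cdot \mid x^{t-1}) \,\big\|\, Q(\cdot \mid x^{t-1}) \right) \right], \\
  % Pinsker's inequality bounds total variation by relative entropy, and
  % Jensen's inequality (concavity of the square root) then gives:
  \frac{1}{n} \sum_{t=1}^{n} \mathbb{E}_P\!\left[ \big\| P(\cdot \mid x^{t-1}) - Q(\cdot \mid x^{t-1}) \big\|_{1} \right]
    &\le \sqrt{\frac{2}{n}\, D(P^n \,\|\, Q^n)}.
\end{align*}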
Subjects / Keywords
Bayesian learning; Repeated decision problem; Value of information; Entropy
Related items
Showing items related by title and author.
- Gossner, Olivier; Tomala, Tristan (2003) Working paper
- Gossner, Olivier; Tomala, Tristan (2007) Article accepted for publication or published
- Tomala, Tristan; Laraki, Rida; Gossner, Olivier (2009) Article accepted for publication or published
- Gossner, Olivier; Laraki, Rida; Tomala, Tristan (2004-11) Working paper
- Gossner, Olivier; Tomala, Tristan (2005) Working paper