Recent work on the convergence of posterior distributions under
Bayesian updating has established conditions under which the
posterior will concentrate on the truth, if the latter has a perfect
representation within the support of the prior, and under various
dynamical assumptions, such as the data being independent and
identically distributed or Markovian. Here I establish sufficient
conditions for the convergence of the posterior distribution in
non-parametric problems even when all of the hypotheses are wrong,
and the data-generating process has a complicated dependence
structure. The main dynamical assumption is the generalized
asymptotic equipartition (or "Shannon-McMillan-Breiman") property of
information theory. I derive a kind of large deviations principle
for the posterior measure, and discuss the advantages of making
predictions with a combination of models known to be wrong. An appendix
sketches connections between the present results and the "replicator
dynamics" of evolutionary theory.