Technical Report 705

**On the Calibration of Bayesian Model Choice Criteria**

**Pantelis K. Vlachos and Alan E. Gelfand**

### Abstract:

Model choice is a fundamental problem in data analysis. Since our
interest lies in hierarchical models, which typically arise as Bayesian
specifications, we confine ourselves to Bayesian model choice
criteria. If **Y** denotes the observed data and *T*(**Y**) is the
criterion, our goal is to calibrate *T*(**Y**) in order to assess how
large or small it is under a given model. In particular, if we have
the distribution of *T*(**Y**), then we can compute any tail
probabilities or quantiles of interest.

Apart from very special cases, analytic development of such
distributions is intractable. Standard analytic approximations may be
inapplicable when random effects are introduced at the various
modeling levels. Indeed, calculation of *T*(**Y**) itself is often
difficult enough.

We suggest a generic simulation-intensive approach for obtaining the
distribution of *T*(**Y**) to arbitrary accuracy. We focus on various
Bayes factors, e.g., the usual Bayes factor, the posterior Bayes
factor and the pseudo-Bayes factor. We illustrate with a binomial
regression example.
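As a concrete illustration of this kind of calibration (a minimal sketch, not the authors' implementation), the following Python code uses a conjugate binomial setup in which the marginal likelihoods, and hence the Bayes factor, are available in closed form. The sample sizes, the Beta(1, 1) versus Beta(5, 1) prior comparison, and the data-generating probability are all hypothetical choices for illustration: replicate datasets are simulated under one model to build the reference distribution of the log Bayes factor, against which an observed criterion value is calibrated via a Monte Carlo tail probability.

```python
import numpy as np
from scipy.special import betaln, gammaln

rng = np.random.default_rng(1)
n_obs, trials = 20, 10  # hypothetical design: 20 binomial observations, 10 trials each

def log_marginal(y, a, b):
    """Exact log marginal likelihood of y_i ~ Binomial(trials, p) with p ~ Beta(a, b)."""
    y = np.asarray(y)
    log_binom = (gammaln(trials + 1) - gammaln(y + 1) - gammaln(trials - y + 1)).sum()
    s = y.sum()
    return log_binom + betaln(a + s, b + trials * y.size - s) - betaln(a, b)

def criterion(y):
    # T(Y): log Bayes factor for M1: p ~ Beta(1, 1) versus M2: p ~ Beta(5, 1)
    return log_marginal(y, 1.0, 1.0) - log_marginal(y, 5.0, 1.0)

# Monte Carlo reference distribution of T(Y) under M1:
# draw p from the M1 prior, replicate data, and record the criterion.
R = 2000
ref = np.empty(R)
for r in range(R):
    p = rng.beta(1.0, 1.0)
    y_rep = rng.binomial(trials, p, size=n_obs)
    ref[r] = criterion(y_rep)

# Calibrate an observed criterion value against the simulated distribution.
y_obs = rng.binomial(trials, 0.7, size=n_obs)  # stand-in for real data
t_obs = criterion(y_obs)
tail_prob = (ref <= t_obs).mean()  # Monte Carlo estimate of P(T(Y) <= t_obs | M1)
```

Increasing `R` refines the reference distribution to arbitrary accuracy; in non-conjugate hierarchical settings each `criterion(y_rep)` evaluation would itself require Monte Carlo computation.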

*Keywords:* Bayes factor, Monte Carlo, pairwise model comparison,
posterior Bayes factor, pseudo-Bayes factor.
