Robert E. Kass and Adrian E. Raftery
The points we emphasize are:
- from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory;
- Bayes factors offer a way of evaluating evidence in favor of a null hypothesis;
- Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis;
- Bayes factors are very general, and do not require alternative models to be nested;
- several techniques are available for computing Bayes factors, including asymptotic approximations that are easy to compute using the output from standard packages that maximize likelihoods;
- in "non-standard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance tests;
- the Schwarz criterion (or BIC) gives a crude approximation to the logarithm of the Bayes factor, which is easy to use and does not require evaluation of prior distributions;
- when one is interested in estimation or prediction, Bayes factors may be converted to weights to be attached to various models so that a composite estimate or prediction may be obtained that takes account of structural or model uncertainty;
- algorithms have been proposed that allow model uncertainty to be taken into account when the class of models initially considered is very large;
- Bayes factors are useful for guiding an evolutionary model-building process;
- finally, it is important, and feasible, to assess the sensitivity of conclusions to the prior distributions used.
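To illustrate the Schwarz criterion (BIC) point above, the following is a minimal sketch, not taken from the article itself: for simulated normal data with known unit variance, it compares H0: mu = 0 against H1: mu free, using the difference in maximized log-likelihoods penalized by half the difference in parameter dimension times log n as an approximation to the log Bayes factor. The data, sample size, and true mean are assumed values chosen for illustration.

```python
import math
import random

# Hypothetical data: n observations from N(mu, 1), sigma known.
# H0: mu = 0 (0 free parameters) vs. H1: mu free (1 free parameter).
random.seed(0)
n = 100
mu_true = 0.3
data = [random.gauss(mu_true, 1.0) for _ in range(n)]

def loglik(mu):
    # Log-likelihood of N(mu, 1) data, up to a model-independent constant.
    return -0.5 * sum((x - mu) ** 2 for x in data)

mle = sum(data) / n  # MLE of mu under H1

# Schwarz criterion: difference in maximized log-likelihoods,
# penalized by (d1 - d0)/2 * log n.  This approximates log B10,
# the log Bayes factor for H1 against H0, without any prior input.
S = (loglik(mle) - loglik(0.0)) - 0.5 * (1 - 0) * math.log(n)
print(S)
```

Positive values of S favor H1; the approximation is crude (its relative error does not vanish as n grows) but requires no evaluation of prior distributions.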
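The conversion of Bayes factors to model weights mentioned above can be sketched as follows; the log Bayes factors and point predictions are assumed illustrative values, and equal prior odds across models are assumed. Each model's weight is its posterior probability, and the composite prediction is the weighted average, which accounts for model uncertainty.

```python
import math

# Assumed illustrative values: log Bayes factors of three models
# against model 0 (so log_B[0] = 0), and each model's point prediction.
log_B = [0.0, 1.2, -0.7]
predictions = [2.1, 2.4, 1.9]

# Posterior model probabilities under equal prior odds:
# subtract the max before exponentiating for numerical stability.
m = max(log_B)
weights = [math.exp(b - m) for b in log_B]
total = sum(weights)
weights = [w / total for w in weights]

# Composite (model-averaged) prediction.
composite = sum(w * p for w, p in zip(weights, predictions))
print(weights, composite)
```

The composite always lies between the smallest and largest individual predictions, and the model with the largest Bayes factor receives the largest weight.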
KEY WORDS: Bayesian hypothesis tests; BIC; Importance sampling; Laplace method; Markov chain Monte Carlo; Model selection; Monte Carlo integration; Posterior model probabilities; Posterior odds; Quadrature; Schwarz criterion; Sensitivity analysis; Strength of evidence.