Bayesian Model Comparison Based on Predictive Entropy

by Jukka Corander

Research Report 1999:8

 Department of Statistics, Stockholm University, S-106 91 Stockholm, Sweden

Abstract



A general criterion for determining the utility of a probability model in explaining a set of empirical observations is introduced. Starting from a logarithmic utility function and a reference prior for the model parameters, we formally derive a criterion that is based on the posterior expectation of the predictive entropy and is asymptotically equivalent to the Schwarz criterion. The difference between the negative expected predictive entropy and the maximized log-likelihood is shown to provide a valuable measure of the amount of information in the data. For some exponential models, the introduced criterion is nearly equivalent to the logarithm of a fractional marginal likelihood based on a minimal training sample, regardless of the number of observations. Various standard and non-standard problems are considered to illustrate the favorable properties of our approach.
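Since the abstract states that the entropy-based criterion is asymptotically equivalent to the Schwarz criterion, the following minimal sketch illustrates that limiting form only: scoring two Gaussian models with the Schwarz criterion (BIC), written as the maximized log-likelihood minus (d/2) log n, where d is the number of free parameters. The data, the models, and all function names are illustrative assumptions, not taken from the report.

```python
import math
import random

def gaussian_loglik(data, mu, sigma2):
    """Log-likelihood of i.i.d. Gaussian data with mean mu, variance sigma2."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma2))

def schwarz_score(data, d, loglik):
    """Schwarz criterion: maximized log-likelihood minus (d/2) log n.

    Larger is better under this sign convention. This is only the
    asymptotic form the abstract refers to, not the paper's
    entropy-based criterion itself.
    """
    return loglik - 0.5 * d * math.log(len(data))

random.seed(0)
# Hypothetical data: 200 draws from a Gaussian with mean 2, variance 1.
data = [random.gauss(2.0, 1.0) for _ in range(200)]
n = len(data)

# Model 1: mean fixed at 0, variance free (d = 1 free parameter).
s2_0 = sum(x * x for x in data) / n  # MLE of variance when mu = 0
score1 = schwarz_score(data, 1, gaussian_loglik(data, 0.0, s2_0))

# Model 2: mean and variance both free (d = 2 free parameters).
mu_hat = sum(data) / n
s2_hat = sum((x - mu_hat) ** 2 for x in data) / n
score2 = schwarz_score(data, 2, gaussian_loglik(data, mu_hat, s2_hat))

# The data were generated with mean 2, so the two-parameter model
# should score higher despite its larger penalty.
```

The penalty term (d/2) log n grows with the sample size, so the richer model is preferred only when its likelihood gain outweighs the extra half-log-n charged per parameter.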

Keywords: Bayesian model determination, Entropy, Fractional marginal likelihood, Reference prior, Utility


