Bayesian model reduction is a method for computing the evidence and parameters of Bayesian models which differ only in the specification of their priors. Typically, a 'full' model is fitted to the available data using standard approaches. Hypotheses are then tested by defining one or more 'reduced' models, which differ from the full model only in their priors. The evidence and parameters of each reduced model can then be computed from the evidence and parameters of the full model. If the priors and posteriors are Gaussian, this has an analytic solution which can be computed rapidly. Bayesian model reduction has multiple scientific and engineering applications, including rapidly scoring large numbers of models and facilitating the estimation of hierarchical (Parametric Empirical Bayes) models.
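For the Gaussian case mentioned above, the reduction reduces to matrix algebra on the full model's prior and posterior. The following sketch is a minimal illustration in Python using NumPy; the function name, variable names and the parameterisation by means and covariances are assumptions made for this example rather than part of any particular software package.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative, not from any specific package) of the analytic
# reduction when all densities are multivariate Gaussian.
import numpy as np

def bayesian_model_reduction(mu0, S0, mu, S, mu0_r, S0_r):
    """Return the reduced posterior (mean, covariance) and the change in log
    evidence ln p~(y) - ln p(y), given the full prior N(mu0, S0), the full
    posterior N(mu, S) and the reduced prior N(mu0_r, S0_r)."""
    P0, P, P0_r = (np.linalg.inv(A) for A in (S0, S, S0_r))  # precision matrices

    # Reduced posterior precision and mean (assumes P_r is positive definite).
    P_r = P + P0_r - P0
    S_r = np.linalg.inv(P_r)
    mu_r = S_r @ (P @ mu + P0_r @ mu0_r - P0 @ mu0)

    def logdet(A):
        return np.linalg.slogdet(A)[1]

    # Change in log evidence (log Bayes factor of the reduced vs. full model).
    dF = 0.5 * (logdet(P) + logdet(P0_r) - logdet(P0) - logdet(P_r)) \
       + 0.5 * (mu_r @ P_r @ mu_r + mu0 @ P0 @ mu0
                - mu @ P @ mu - mu0_r @ P0_r @ mu0_r)
    return mu_r, S_r, dF
</syntaxhighlight>

Because the reduction only involves the full model's prior and posterior, many candidate priors can be scored against the same fitted model in this way without refitting it to the data.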
==== Theory ====
Consider some model with parameters <math>\theta</math> and a prior probability density on those parameters <math>p(\theta)</math>. The posterior belief about <math>\theta</math> after seeing the data, <math>p(\theta|y)</math>, is given by Bayes' rule:
<math>\begin{align}
p(\theta|y) &= \frac{p(y|\theta)p(\theta)}{p(y)} \\
p(y) &= \int p(y|\theta)p(\theta)\,d\theta
\end{align}</math>
The second line of the equation is the model evidence: the probability of observing the data under the model. In practice, the posterior usually cannot be computed analytically, because the integral in the second line is intractable. The posterior and evidence are therefore estimated using approaches such as MCMC sampling or variational Bayes. Having estimated them with one of these approaches, a reduced model can then be defined with an alternative set of priors <math>\tilde{p}(\theta)</math>:

<math>\begin{align}
\tilde{p}(\theta|y) &= \frac{p(y|\theta)\tilde{p}(\theta)}{\tilde{p}(y)} \\
\tilde{p}(y) &= \int p(y|\theta)\tilde{p}(\theta)\,d\theta
\end{align}</math>
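As a hypothetical example of specifying such a reduced prior, a common choice is to keep the full prior but shrink the prior variance of one parameter towards zero, encoding the hypothesis that the parameter is not needed. The sketch below (all numerical values are arbitrary, and it reuses the illustrative <code>bayesian_model_reduction</code> function from the introduction) constructs such a prior and scores the resulting reduced model.

<syntaxhighlight lang="python">
# Hypothetical example: the full-model quantities below would normally come
# from fitting the full model to data; the numbers here are arbitrary.
import numpy as np

mu0, S0 = np.zeros(2), np.eye(2)                  # full prior
mu,  S  = np.array([0.8, 0.1]), 0.05 * np.eye(2)  # full posterior (illustrative)

# Reduced prior: identical to the full prior, except that the second
# parameter is effectively switched off by a near-zero prior variance.
mu0_r, S0_r = mu0.copy(), S0.copy()
S0_r[1, 1] = 1e-8

mu_r, S_r, dF = bayesian_model_reduction(mu0, S0, mu, S, mu0_r, S0_r)
# dF = ln p~(y) - ln p(y): positive values favour the reduced model
# (the parameter is redundant), negative values favour retaining it.
</syntaxhighlight>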
The objective is to compute the reduced posterior <math>\tilde{p}(\theta|y)</math> and evidence <math>\tilde{p}(y)</math> from the full posterior <math>p(\theta|y)</math> and evidence <math>p(y)</math>. We can express this as follows: