

Bayesian model reduction

Bayesian model reduction is a method for computing the evidence and parameters of Bayesian models which differ in the specification of their priors. Typically, a full model is fitted to the available data with standard approaches. Hypotheses are then tested by defining one or more 'reduced' models with alternative priors. A reduced model has more restrictive priors than the full model; in the limit, fixing a prior variance at zero switches the corresponding parameter off. Bayesian model reduction is used to compute the evidence and parameters of the reduced models given the evidence and posteriors of the full model. If the priors and posteriors are Gaussian, then there is an analytic solution (detailed below) which can be computed rapidly. Bayesian model reduction has multiple scientific and engineering applications, including rapidly scoring large numbers of models and facilitating the estimation of hierarchical models (parametric empirical Bayes).

Theory

Consider some model with parameters $\theta$ and a prior probability density on those parameters $p(\theta)$. The posterior belief about $\theta$ after seeing the data, $p(\theta \mid y)$, is given by Bayes' rule:

$$p(\theta \mid y) = \frac{p(y \mid \theta)\,p(\theta)}{p(y)}$$
$$p(y) = \int p(y \mid \theta)\,p(\theta)\,d\theta$$
The second line of this equation is the model evidence, which is the probability of observing the data given the model. In practice, the posterior cannot usually be computed analytically due to the integral. Therefore, the posteriors are estimated using approaches such as MCMC sampling or variational Bayes. A reduced model can then be defined with an alternative set of priors $\tilde{p}(\theta)$:

$$\tilde{p}(\theta \mid y) = \frac{p(y \mid \theta)\,\tilde{p}(\theta)}{\tilde{p}(y)}$$
The objective of Bayesian model reduction is to compute the posterior $\tilde{p}(\theta \mid y)$ and evidence $\tilde{p}(y)$ of the reduced model from the posterior $p(\theta \mid y)$ and evidence $p(y)$ of the full model. Combining the first two equations and re-arranging, we can express the reduced posterior as the product of the full model's posterior, the ratio of priors and the ratio of evidences:

$$\tilde{p}(\theta \mid y) = p(\theta \mid y)\,\frac{\tilde{p}(\theta)}{p(\theta)}\,\frac{p(y)}{\tilde{p}(y)}$$
To compute the evidence of the reduced model, we integrate each side of this equation over the parameters, noting that the reduced posterior on the left-hand side integrates to one:

$$\int \tilde{p}(\theta \mid y)\,d\theta = 1 = \frac{p(y)}{\tilde{p}(y)} \int p(\theta \mid y)\,\frac{\tilde{p}(\theta)}{p(\theta)}\,d\theta$$
And by re-arrangement:

$$\tilde{p}(y) = p(y) \int p(\theta \mid y)\,\frac{\tilde{p}(\theta)}{p(\theta)}\,d\theta$$
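As a sanity check on this identity, here is a minimal sketch in Python (NumPy/SciPy) using a conjugate toy model in which both evidences are also available in closed form; the model, numbers and variable names are illustrative choices, not taken from any particular toolbox. It estimates the reduced evidence by averaging the prior ratio over samples from the full posterior.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy full model (illustrative): y = theta + noise with unit noise variance,
# full prior theta ~ N(0, 1), and a single observed data point.
y = 1.5
prior_mu, prior_sd = 0.0, 1.0
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0)             # conjugate posterior variance
post_mu = post_var * (prior_mu / prior_sd**2 + y)      # conjugate posterior mean
evidence_full = norm.pdf(y, prior_mu, np.sqrt(prior_sd**2 + 1.0))

# Reduced prior: shrink theta towards zero (a 'switched-off' parameter).
red_mu, red_sd = 0.0, 0.1

# Bayesian model reduction: the reduced evidence is the full evidence times the
# expectation of the prior ratio under the full posterior.
theta = rng.normal(post_mu, np.sqrt(post_var), size=500_000)
ratio = norm.pdf(theta, red_mu, red_sd) / norm.pdf(theta, prior_mu, prior_sd)
evidence_reduced_bmr = evidence_full * ratio.mean()

# Exact reduced evidence, available here because the toy model is conjugate.
evidence_reduced_exact = norm.pdf(y, red_mu, np.sqrt(red_sd**2 + 1.0))

print(evidence_reduced_bmr, evidence_reduced_exact)    # the two values should agree closely
```

In practice the integral is not estimated by sampling; the point of Bayesian model reduction is that, for common choices of densities, it can be evaluated analytically, as in the Gaussian case below.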
Gaussian priors and posteriors

Under Gaussian prior and posterior densities, as are often used in the context of variational Bayes, Bayesian model reduction has a simple analytical expression. We define normal densities for the priors and posteriors:

$$p(\theta) = \mathcal{N}(\mu_0, \Sigma_0) \qquad \tilde{p}(\theta) = \mathcal{N}(\tilde{\mu}_0, \tilde{\Sigma}_0)$$
$$p(\theta \mid y) = \mathcal{N}(\mu, \Sigma) \qquad \tilde{p}(\theta \mid y) = \mathcal{N}(\tilde{\mu}, \tilde{\Sigma})$$

where the tilde symbol (~) indicates quantities relating to the reduced model and subscript zero, such as $\mu_0$, indicates parameters of the priors. For convenience we also define precision matrices, which are simply the inverse of each covariance matrix:

$$\Pi_0 = \Sigma_0^{-1} \qquad \tilde{\Pi}_0 = \tilde{\Sigma}_0^{-1} \qquad \Pi = \Sigma^{-1} \qquad \tilde{\Pi} = \tilde{\Sigma}^{-1}$$

We also assume that the free energy $F$ of the full model has been computed, which is a lower bound on the log model evidence: $F \le \ln p(y)$. The reduced model's free energy $\tilde{F}$ and posterior parameters $(\tilde{\mu}, \tilde{\Sigma})$ are then given by the expressions:

$$\tilde{\Sigma} = \left(\Pi + \tilde{\Pi}_0 - \Pi_0\right)^{-1}$$
$$\tilde{\mu} = \tilde{\Sigma}\left(\Pi\mu + \tilde{\Pi}_0\tilde{\mu}_0 - \Pi_0\mu_0\right)$$
$$\tilde{F} = F + \frac{1}{2}\ln\frac{|\tilde{\Sigma}|\,|\Sigma_0|}{|\Sigma|\,|\tilde{\Sigma}_0|} - \frac{1}{2}\left(\mu^\top\Pi\mu + \tilde{\mu}_0^\top\tilde{\Pi}_0\tilde{\mu}_0 - \mu_0^\top\Pi_0\mu_0 - \tilde{\mu}^\top\tilde{\Pi}\tilde{\mu}\right)$$
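For illustration, the following is a minimal NumPy sketch of these expressions; the function name, argument names and toy numbers are ours, chosen for this example rather than taken from any reference implementation, and a production version would add numerical safeguards.

```python
import numpy as np

def bayesian_model_reduction(mu0, S0, mu, S, mu0_r, S0_r, F=0.0):
    """Closed-form Bayesian model reduction for Gaussian priors and posteriors.

    mu0, S0     -- full prior mean and covariance
    mu, S       -- full posterior mean and covariance
    mu0_r, S0_r -- reduced prior mean and covariance
    F           -- free energy (log evidence approximation) of the full model
    Returns the reduced posterior mean and covariance and the reduced free energy.
    """
    P0, P, P0_r = np.linalg.inv(S0), np.linalg.inv(S), np.linalg.inv(S0_r)
    P_r = P + P0_r - P0                                 # reduced posterior precision
    S_r = np.linalg.inv(P_r)                            # reduced posterior covariance
    mu_r = S_r @ (P @ mu + P0_r @ mu0_r - P0 @ mu0)     # reduced posterior mean

    logdet = lambda A: np.linalg.slogdet(A)[1]
    dF = 0.5 * (logdet(S_r) + logdet(S0) - logdet(S) - logdet(S0_r)) \
         - 0.5 * (mu @ P @ mu + mu0_r @ P0_r @ mu0_r
                  - mu0 @ P0 @ mu0 - mu_r @ P_r @ mu_r)
    return mu_r, S_r, F + dF

# Toy usage: two parameters; the reduced prior shrinks the second one to (almost) zero.
mu0, S0 = np.zeros(2), np.eye(2)
mu, S = np.array([0.4, 0.05]), 0.2 * np.eye(2)          # pretend these came from fitting the full model
mu0_r, S0_r = np.zeros(2), np.diag([1.0, 1e-6])         # 'switch off' the second parameter
mu_r, S_r, F_r = bayesian_model_reduction(mu0, S0, mu, S, mu0_r, S0_r)
print(mu_r, F_r)
```

With the full model's free energy set to zero in this toy example, a positive F_r means the reduced model (with the second parameter switched off) has the higher approximate log evidence; here the data barely inform that parameter, so the simpler model is preferred.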