Dynamic causal modeling

== Bayesian model reduction ==
Bayesian model reduction <ref name=":0">{{Cite journal|last=Friston|first=Karl|last2=Penny|first2=Will|date=June 2011|title=Post hoc Bayesian model selection|url=https://doi.org/10.1016/j.neuroimage.2011.03.062|journal=NeuroImage|volume=56|issue=4|pages=2089–2099|doi=10.1016/j.neuroimage.2011.03.062|issn=1053-8119|pmc=PMC3112494|pmid=21459150|via=}}</ref><ref name=":1">{{Cite journal|last=Friston|first=Karl J.|last2=Litvak|first2=Vladimir|last3=Oswal|first3=Ashwini|last4=Razi|first4=Adeel|last5=Stephan|first5=Klaas E.|last6=van Wijk|first6=Bernadette C.M.|last7=Ziegler|first7=Gabriel|last8=Zeidman|first8=Peter|date=March 2016|title=Bayesian model reduction and empirical Bayes for group (DCM) studies|url=https://doi.org/10.1016/j.neuroimage.2015.11.015|journal=NeuroImage|volume=128|pages=413–431|doi=10.1016/j.neuroimage.2015.11.015|issn=1053-8119|pmc=PMC4767224|pmid=26569570|via=}}</ref> is a method for computing the [[Marginal likelihood|evidence]] and parameters of [[Bayesian statistics|Bayesian]] models which differ in the specification of their [[Prior probability|priors]]. A full model is fitted to the available data using standard approaches. Hypotheses are then tested by defining one or more 'reduced' models with alternative (and usually more restrictive) priors, which in the limit will switch off certain parameters. The evidence and parameters of the reduced models can then be computed from the evidence and estimated ([[Posterior probability|posterior]]) parameters of the full model using Bayesian model reduction. If the priors and posteriors are [[Normal distribution|normally distributed]], then there is an analytic solution which can be computed rapidly. This has multiple scientific and engineering applications, including rapidly scoring the evidence for large numbers of models and facilitating the estimation of hierarchical models ([[Empirical Bayes method|Parametric Empirical Bayes]]).
 
== Theory ==
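In brief, consider a 'full' model with prior <math>p(\theta)</math> over its parameters and likelihood <math>p(y\mid\theta)</math>, and a 'reduced' model that shares the likelihood but has a different (typically more restrictive) prior <math>\tilde{p}(\theta)</math>. Because the two models differ only in their priors, the evidence and posterior of the reduced model can be expressed in terms of those of the full model:<ref name=":0" />

:<math>\tilde{p}(y) = p(y)\int \frac{p(\theta\mid y)\,\tilde{p}(\theta)}{p(\theta)}\,d\theta \qquad\qquad
\tilde{p}(\theta\mid y) = \frac{p(\theta\mid y)\,\tilde{p}(\theta)\,p(y)}{p(\theta)\,\tilde{p}(y)}</math>

In practice, the exact posterior <math>p(\theta\mid y)</math> is replaced by the approximate posterior <math>q(\theta)</math> obtained by fitting the full model, so that the reduced model can be evaluated using only the fitted full model and the two priors.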
 
== Gaussian priors and posteriors ==
Under Gaussian prior and posterior densities, such as are used in the context of [[Variational Bayesian methods|variational Bayes]], Bayesian model reduction has a simple analytical solution.<ref name=":0" /> First define normal densities for the priors and posteriors:
 
{{NumBlk|:|<math>\begin{align}
p(\theta) &= N(\mu_0,\Sigma_0) & q(\theta) &= N(\mu,\Sigma) \\
\tilde{p}(\theta) &= N(\tilde{\mu}_0,\tilde{\Sigma}_0) & \tilde{q}(\theta) &= N(\tilde{\mu},\tilde{\Sigma})
\end{align}</math>|}}

where <math>p(\theta)</math> and <math>q(\theta)</math> are the prior and posterior of the full model, and <math>\tilde{p}(\theta)</math> and <math>\tilde{q}(\theta)</math> are the prior and posterior of the reduced model.
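Writing <math>\Sigma^{-1}</math> for the corresponding precision (inverse covariance) matrices, completing the square gives the reduced posterior and log evidence in closed form (a restatement, in the notation defined above, of the result derived in the references cited earlier):

:<math>\begin{align}
\tilde{\Sigma}^{-1} &= \Sigma^{-1} + \tilde{\Sigma}_0^{-1} - \Sigma_0^{-1} \\
\tilde{\mu} &= \tilde{\Sigma}\left(\Sigma^{-1}\mu + \tilde{\Sigma}_0^{-1}\tilde{\mu}_0 - \Sigma_0^{-1}\mu_0\right) \\
\ln \tilde{p}(y) &= \ln p(y) + \tfrac{1}{2}\ln\frac{|\Sigma^{-1}|\,|\tilde{\Sigma}_0^{-1}|}{|\Sigma_0^{-1}|\,|\tilde{\Sigma}^{-1}|} + \tfrac{1}{2}\left(\tilde{\mu}^\top\tilde{\Sigma}^{-1}\tilde{\mu} - \mu^\top\Sigma^{-1}\mu - \tilde{\mu}_0^\top\tilde{\Sigma}_0^{-1}\tilde{\mu}_0 + \mu_0^\top\Sigma_0^{-1}\mu_0\right)
\end{align}</math>

Only inversions and determinants of the (typically small) parameter covariance matrices are required, which is why the evidence of reduced models can be scored very rapidly for large numbers of candidate models.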
[[File:Example full and reduced priors.png|thumb|Example priors. In a 'full' model, left, a parameter has a Gaussian prior with mean 0 and standard deviation 0.5. In a 'reduced' model, right, the same parameter has prior mean zero and standard deviation 1/1000. Bayesian model reduction enables the evidence and parameter(s) of the reduced model to be derived from the evidence and parameter(s) of the full model.]]
 
Consider a model with a parameter <math>\theta</math> and Gaussian prior <math>p(\theta)=N(0,0.5^2)</math>, which is the normal distribution with mean zero and standard deviation 0.5 (illustrated in the Figure, left). This prior says that without any data, the parameter is expected to have value zero, but we are willing to entertain positive or negative values (roughly 98% of the prior probability lies within the interval [−1.16, 1.16]). The model with this prior is fitted to the data, to provide a posterior estimate of the parameter <math>q(\theta)</math> and the model evidence <math>p(y)</math>.
 
To assess whether the parameter contributed to the model evidence, i.e. whether we learnt anything about this parameter, an alternative 'reduced' model is specified in which the parameter has a prior with a much smaller variance: e.g. <math>\tilde{p}(\theta)=N(0,0.001^2)</math>, with standard deviation 1/1000. This is illustrated in the Figure (right). This prior effectively 'switches off' the parameter, saying that we are almost certain it has value zero. The posterior <math>\tilde{q}(\theta)</math> and evidence <math>\tilde{p}(y)</math> of this reduced model are rapidly computed from the full model using Bayesian model reduction.
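In this one-parameter example, writing <math>m</math> and <math>s</math> for the posterior mean and standard deviation estimated under the full model (notation introduced here purely for illustration), and noting that both prior means are zero, the general expressions above reduce to

:<math>\tilde{s}^{-2} = s^{-2} + 0.001^{-2} - 0.5^{-2}, \qquad
\tilde{m} = \tilde{s}^{2}\, s^{-2}\, m, \qquad
\ln \tilde{p}(y) = \ln p(y) + \tfrac{1}{2}\ln\frac{0.5^{2}\,\tilde{s}^{2}}{0.001^{2}\,s^{2}} + \tfrac{1}{2}\left(\frac{\tilde{m}^{2}}{\tilde{s}^{2}} - \frac{m^{2}}{s^{2}}\right)</math>

Comparing <math>\ln \tilde{p}(y)</math> with <math>\ln p(y)</math> then gives the log Bayes factor used below.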
 
The hypothesis that the parameter contributed to the model is then tested by comparing the full and reduced models via the [[Bayes factor]], which is the ratio of model evidences:
:<math>BF=\frac{p(y)}{\tilde{p}(y)}</math>
 
The larger this ratio, the greater the evidence for the full model, in which the parameter was left free to vary. Conversely, the stronger the evidence for the reduced model, the more confident we can be that the parameter did not contribute and can be eliminated from the model. Note that this method is not specific to comparing 'switched on' and 'switched off' parameters; any intermediate setting of the priors could also be evaluated.
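A minimal numerical sketch of this comparison is given below (assuming [[NumPy]]; the function name <code>bmr_gaussian</code> and the illustrative full-model posterior <math>N(0.6, 0.1^2)</math> are not taken from the cited papers, only the Gaussian expressions given earlier):

<syntaxhighlight lang="python">
import numpy as np

def bmr_gaussian(mu, Sigma, mu0, Sigma0, mu0_r, Sigma0_r, log_evidence=0.0):
    """Gaussian Bayesian model reduction.

    (mu, Sigma)       -- posterior mean/covariance of the full model
    (mu0, Sigma0)     -- prior mean/covariance of the full model
    (mu0_r, Sigma0_r) -- prior mean/covariance of the reduced model
    Returns the reduced posterior mean/covariance and log evidence.
    """
    # Precision (inverse covariance) matrices
    P, P0, P0_r = (np.linalg.inv(S) for S in (Sigma, Sigma0, Sigma0_r))

    # Reduced posterior: add the change in prior precision to the posterior precision
    P_r = P + P0_r - P0
    Sigma_r = np.linalg.inv(P_r)
    mu_r = Sigma_r @ (P @ mu + P0_r @ mu0_r - P0 @ mu0)

    # Change in log evidence, obtained by completing the square over the parameters
    logdet = lambda A: np.linalg.slogdet(A)[1]
    dF = (0.5 * (logdet(P) + logdet(P0_r) - logdet(P0) - logdet(P_r))
          + 0.5 * (mu_r @ P_r @ mu_r - mu @ P @ mu
                   - mu0_r @ P0_r @ mu0_r + mu0 @ P0 @ mu0))
    return mu_r, Sigma_r, log_evidence + dF

# One-parameter example matching the figure: full prior N(0, 0.5^2),
# reduced prior N(0, 0.001^2). The posterior N(0.6, 0.1^2) is illustrative only.
mu, Sigma = np.array([0.6]), np.array([[0.1 ** 2]])
mu0, Sigma0 = np.zeros(1), np.array([[0.5 ** 2]])
mu0_r, Sigma0_r = np.zeros(1), np.array([[0.001 ** 2]])

mu_r, Sigma_r, dF = bmr_gaussian(mu, Sigma, mu0, Sigma0, mu0_r, Sigma0_r)
log_BF = -dF  # log Bayes factor; positive values favour the full model
</syntaxhighlight>

Because the illustrative posterior mean lies several standard deviations from zero, the resulting log Bayes factor is large and positive, favouring the full model in which the parameter is free.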
 
== Applications ==