== Model estimation ==
Model inversion or estimation is implemented in DCM using [[Variational Bayesian methods|variational Bayes]] under the [[Laplace's method|Laplace assumption]].<ref>{{Citation|last1=Friston|first1=K.|date=2007|pages=606–618|publisher=Elsevier|isbn=9780123725608|last2=Mattout|first2=J.|last3=Trujillo-Barreto|first3=N.|last4=Ashburner|first4=J.|last5=Penny|first5=W.|doi=10.1016/b978-012372560-8/50047-4|chapter=Variational Bayes under the Laplace approximation|title=Statistical Parametric Mapping}}</ref> This provides two useful quantities. The first is the log marginal likelihood or model evidence <math>\ln{p(y|m)}</math>, the probability of observing the data under a given model. Generally, this cannot be calculated explicitly and is approximated by a quantity called the negative variational free energy <math>F</math>, referred to in machine learning as the evidence lower bound (ELBO). Hypotheses are tested by comparing the evidence for different models based on their free energy, a procedure called Bayesian model comparison.
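The free energy bounds the log evidence from below, with the gap given by the [[Kullback–Leibler divergence]] between the approximate posterior <math>q(\theta)</math> and the true posterior:

<math display="block">F = \ln{p(y|m)} - D_{\mathrm{KL}}\!\left[\,q(\theta)\;\|\;p(\theta|y,m)\,\right] \leq \ln{p(y|m)}.</math>

Maximising <math>F</math> with respect to <math>q(\theta)</math> therefore tightens the approximation to the log evidence while improving the posterior estimate, and the difference in free energy between two models approximates their log [[Bayes factor]].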
The second is the approximate posterior density over the parameters <math>p(\theta|y)</math>, for example connection strengths, obtained by maximising the free energy. Where models differ only in their priors, [[Bayesian model reduction|Bayesian Model Reduction]] can be used to derive the evidence and parameters of nested or reduced models analytically and efficiently.
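Specifically, if the full model <math>m</math> has prior <math>p(\theta|m)</math> and a reduced model <math>\tilde{m}</math> has prior <math>\tilde{p}(\theta|\tilde{m})</math> but the same likelihood, the reduced model's evidence follows from the posterior of the full model:

<math display="block">p(y|\tilde{m}) = p(y|m)\int p(\theta|y,m)\,\frac{\tilde{p}(\theta|\tilde{m})}{p(\theta|m)}\,d\theta.</math>

Under the Gaussian assumptions of the Laplace approximation, this integral has a closed form, so large families of reduced models can be scored without refitting the data.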