Variational Bayesian methods

#To derive a [[lower bound]] for the [[marginal likelihood]] (sometimes called the ''evidence'') of the observed data (i.e. the [[marginal probability]] of the data given the model, with marginalization performed over unobserved variables). This is typically used for performing [[model selection]], the general idea being that a higher marginal likelihood for a given model indicates a better fit of the data by that model and hence a greater probability that the model in question was the one that generated the data. (See also the [[Bayes factor]] article.)
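
Concretely, for any choice of approximating distribution <math>q(\mathbf{Z})</math> over the unobserved variables <math>\mathbf{Z}</math>, the log evidence decomposes as

:<math>\ln p(\mathbf{X}) = \mathcal{L}(q) + D_{\mathrm{KL}}\bigl(q(\mathbf{Z}) \,\|\, p(\mathbf{Z}\mid\mathbf{X})\bigr), \qquad \mathcal{L}(q) = \operatorname{E}_{q}\bigl[\ln p(\mathbf{X},\mathbf{Z}) - \ln q(\mathbf{Z})\bigr].</math>

Because the [[Kullback–Leibler divergence]] is non-negative, <math>\mathcal{L}(q)</math> (often called the ''evidence lower bound'') never exceeds <math>\ln p(\mathbf{X})</math>, and maximizing it over <math>q</math> tightens the bound.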
 
For the former purpose (approximating a posterior probability), variational Bayes is an alternative to [[Monte Carlo sampling]] methods, particularly [[Markov chain Monte Carlo]] methods such as [[Gibbs sampling]], for taking a fully Bayesian approach to [[statistical inference]] over complex [[probability distribution|distributions]] that are difficult to evaluate directly or [[sample (statistics)|sample]] from. Whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally optimal, exact analytical solution to an approximation of the posterior.
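
In this sense, variational Bayes recasts inference as optimization: one seeks the member of a tractable family <math>\mathcal{Q}</math> (for example, fully factorized ''mean-field'' distributions) that is closest to the true posterior in Kullback–Leibler divergence,

:<math>q^{*}(\mathbf{Z}) = \underset{q \in \mathcal{Q}}{\operatorname{arg\,min}} \; D_{\mathrm{KL}}\bigl(q(\mathbf{Z}) \,\|\, p(\mathbf{Z}\mid\mathbf{X})\bigr),</math>

which, by the decomposition above, is equivalent to maximizing <math>\mathcal{L}(q)</math>. The optimum is exact within <math>\mathcal{Q}</math>, but <math>q^{*}</math> itself only approximates the true posterior.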
 
Variational Bayes can be seen as an extension of the [[expectation-maximization]] (EM) algorithm from [[maximum a posteriori estimation]] (MAP estimation) of the single most probable value of each parameter to fully Bayesian estimation, which computes (an approximation to) the entire [[posterior distribution]] of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically and are instead iterated until they converge.
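
To make the alternating structure concrete, the following is a minimal sketch, not drawn from this article and with illustrative function and variable names, of the coordinate-ascent updates for one of the simplest conjugate cases: data <math>x_i \sim \mathcal{N}(\mu, \tau^{-1})</math> with a conditionally normal prior <math>\mu \mid \tau \sim \mathcal{N}(\mu_0, (\lambda_0\tau)^{-1})</math> on the mean and a gamma prior on the precision <math>\tau</math>, under the mean-field factorization <math>q(\mu,\tau) = q(\mu)\,q(\tau)</math>. Each factor's update uses expectations taken under the other factor, so the two updates are cycled until they stabilize.

<syntaxhighlight lang="python">
import numpy as np

def cavi_normal_gamma(x, mu0=0.0, lambda0=1.0, a0=1.0, b0=1.0,
                      n_iter=100, tol=1e-10):
    """Mean-field coordinate ascent for the conjugate model
        x_i ~ N(mu, 1/tau),   mu | tau ~ N(mu0, 1/(lambda0 * tau)),
        tau ~ Gamma(a0, b0),
    with q(mu, tau) = q(mu) q(tau), where q(mu) = N(mu_n, 1/lambda_n)
    and q(tau) = Gamma(a_n, b_n)."""
    n = len(x)
    x_sum = x.sum()
    x_sq_sum = (x ** 2).sum()

    # For this model, the mean of q(mu) and the shape of q(tau) do not
    # depend on the other factor, so they are fixed up front; only
    # lambda_n and b_n change across iterations.
    mu_n = (lambda0 * mu0 + x_sum) / (lambda0 + n)
    a_n = a0 + (n + 1) / 2.0

    e_tau = a0 / b0                      # initial guess for E[tau]
    b_n = b0
    for _ in range(n_iter):
        # Update q(mu) given the current expectation of tau.
        lambda_n = (lambda0 + n) * e_tau
        e_mu = mu_n
        e_mu2 = mu_n ** 2 + 1.0 / lambda_n   # E[mu^2] under q(mu)

        # Update q(tau) given the current moments of mu.
        b_new = b0 + 0.5 * (x_sq_sum - 2.0 * e_mu * x_sum + n * e_mu2
                            + lambda0 * (e_mu2 - 2.0 * mu0 * e_mu + mu0 ** 2))
        converged = abs(b_new - b_n) < tol
        b_n = b_new
        e_tau = a_n / b_n                # feeds the next pass through q(mu)
        if converged:
            break

    return mu_n, (lambda0 + n) * e_tau, a_n, b_n

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=200)
mu_n, lambda_n, a_n, b_n = cavi_normal_gamma(data)
print(f"E[mu] ~= {mu_n:.3f}, E[tau] ~= {a_n / b_n:.3f}")  # true values: 2.0, 4.0
</syntaxhighlight>

The two updates play roles analogous to the E and M steps of EM, except that each produces a full distribution over its block of variables rather than a point estimate.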