In [[probability theory]] and [[statistics]], a '''graphical model (GM)''' represents [[statistical independence|dependencies]] among [[random variable|random variables]] by a [[graph (mathematics)|graph]] in which each random variable is a node, and the edges between the nodes represent conditional dependencies.
In the simplest case, the network structure of the model is a [[directed acyclic graph]] (DAG). Then the GM represents a factorization of the joint [[probability]] of all random variables. More precisely, if the random variables are <math>X_1, \ldots, X_n</math>, then the joint probability factorizes as <math>P[X_1, \ldots, X_n] = \prod_{i=1}^n P[X_i \mid \mathrm{pa}(X_i)]</math>, where <math>\mathrm{pa}(X_i)</math> denotes the set of parents of <math>X_i</math> in the graph.
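The factorization above can be illustrated with a small sketch. The network and its probability tables below are a hypothetical example (the classic rain/sprinkler/grass-wet network), not taken from the article: each variable's conditional distribution is given only its parents, and the joint probability is the product of these local factors.

```python
# Hypothetical three-node DAG:
#   Rain -> Sprinkler, Rain -> GrassWet, Sprinkler -> GrassWet
# Joint factorization over parents:
#   P(R, S, G) = P(R) * P(S | R) * P(G | R, S)

P_rain = {True: 0.2, False: 0.8}

# P(Sprinkler | Rain): outer key is Rain, inner key is Sprinkler.
P_sprinkler = {
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}

# P(GrassWet | Rain, Sprinkler): outer key is (Rain, Sprinkler).
P_grass = {
    (True, True):   {True: 0.99, False: 0.01},
    (True, False):  {True: 0.80, False: 0.20},
    (False, True):  {True: 0.90, False: 0.10},
    (False, False): {True: 0.00, False: 1.00},
}

def joint(rain, sprinkler, grass):
    """Joint probability of one assignment, via the DAG factorization."""
    return (P_rain[rain]
            * P_sprinkler[rain][sprinkler]
            * P_grass[(rain, sprinkler)][grass])

# Sanity check: the factorized joint sums to 1 over all assignments.
total = sum(joint(r, s, g)
            for r in (True, False)
            for s in (True, False)
            for g in (True, False))
```

Because each local table is a valid conditional distribution, the product automatically defines a valid joint distribution, which is what makes the DAG factorization useful.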
This type of graphical model is known as a directed graphical model, [[Bayesian network]], or belief network.
Classic [[machine learning]] methods such as [[hidden Markov model]]s and [[neural network]]s can be regarded as special cases of graphical models.
There are also undirected graphical models, also called [[Markov network|Markov networks]], in which graph separation encodes conditional independencies; the special case in which the variables are jointly Gaussian is known as a Gaussian graphical model (GGM).
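In an undirected model, the statement "graph separation encodes conditional independence" can be checked mechanically: X and Y are separated given a set Z when every path between them passes through Z, i.e. removing Z disconnects them. The chain graph and variable names below are a hypothetical illustration, not from the article; the check is a plain breadth-first search.

```python
from collections import deque

# Hypothetical four-node Markov network, a chain: A - B - C - D.
# Separation implies, e.g., A and C are conditionally independent given B.
edges = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}

def separated(x, y, given, graph):
    """True if every path from x to y passes through a node in `given`,
    i.e. x and y are disconnected once `given` is removed."""
    if x in given or y in given:
        return True
    seen, queue = {x}, deque([x])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr in given or nbr in seen:
                continue  # blocked by the conditioning set, or already visited
            if nbr == y:
                return False  # found a path that avoids `given`
            seen.add(nbr)
            queue.append(nbr)
    return True
```

For the chain above, `separated("A", "C", {"B"}, edges)` holds while `separated("A", "C", set(), edges)` does not, matching the intuition that conditioning on B blocks the only path between A and C.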