Graphical model
 
In other words, the [[probability distribution|joint distribution]] factors into a product of conditional distributions. Any two nodes that are not connected by an arrow are [[Conditional independence|conditionally independent]] given the values of their parents. More generally, any two sets of nodes are conditionally independent given a third set if a criterion called [[d-separation|''d''-separation]] holds in the graph.
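The factorization described above can be made concrete with a minimal sketch (the network and all numbers below are hypothetical, not from any reference): a three-node Bayesian network A → B, A → C, whose joint distribution factors as P(A, B, C) = P(A) · P(B | A) · P(C | A).

```python
# Hypothetical three-node Bayesian network A -> B, A -> C.
# Each table below is a conditional distribution from the factorization
#   P(A, B, C) = P(A) * P(B | A) * P(C | A).
p_a = {True: 0.2, False: 0.8}                    # P(A)
p_b_given_a = {True: {True: 0.9, False: 0.1},    # P(B | A)
               False: {True: 0.3, False: 0.7}}
p_c_given_a = {True: {True: 0.5, False: 0.5},    # P(C | A)
               False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    """Joint probability as the product of the conditionals."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_a[a][c]

# Because each factor is a proper (conditional) distribution,
# the product sums to 1 over all eight assignments.
total = sum(joint(a, b, c)
            for a in (True, False)
            for b in (True, False)
            for c in (True, False))
```

Note that B and C are conditionally independent given A: once `a` is fixed, the remaining factors for `b` and `c` multiply independently.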
 
This type of graphical model is known as a directed graphical model, [[Bayesian network]], or belief network. Classic machine-learning models such as [[hidden Markov models]] and [[neural networks]], as well as newer models such as [[variable-order Markov model]]s, can be regarded as special cases of Bayesian networks.

Graphical models with undirected edges are generally called [[Markov random field]]s or [[Markov network]]s.
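In an undirected model the joint distribution is instead a normalized product of non-negative potential functions over the graph's cliques. A minimal sketch (the chain and the potential values are made up for illustration): a pairwise Markov network X — Y — Z.

```python
# Hypothetical pairwise Markov network X - Y - Z.
# The unnormalized joint is a product of edge potentials;
# the partition function normalizes it.
from itertools import product

phi_xy = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}  # favors x == y
phi_yz = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}  # favors y == z

def unnormalized(x, y, z):
    return phi_xy[(x, y)] * phi_yz[(y, z)]

# Partition function: sum of the unnormalized product over all states.
partition = sum(unnormalized(x, y, z) for x, y, z in product((0, 1), repeat=3))

def prob(x, y, z):
    return unnormalized(x, y, z) / partition
```

Unlike the conditional tables of a Bayesian network, the potentials need not sum to 1, which is why the explicit normalization step is required.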
 
A third type of graphical model is a [[factor graph]], which is an undirected [[bipartite graph]] connecting variables and ''factor nodes''. Each factor represents a probability distribution over the variables it is connected to. In contrast to a Bayesian network, a factor may be connected to more than two nodes.
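A small sketch of that structure (the factors and numbers are hypothetical): one factor node attached to a single variable, and another attached to three variables at once.

```python
# Hypothetical factor graph over binary variables A, B, C.
from itertools import product

def f1(a):
    """Unary factor connected only to A."""
    return 0.6 if a else 0.4

def f2(a, b, c):
    """One factor node connected to three variable nodes at once."""
    return 2.0 if a == b == c else 1.0

def unnormalized(a, b, c):
    return f1(a) * f2(a, b, c)

# Normalize the product of factors into a distribution.
partition = sum(unnormalized(a, b, c) for a, b, c in product((0, 1), repeat=3))

def prob(a, b, c):
    return unnormalized(a, b, c) / partition
```

The bipartite reading: variable nodes {A, B, C} on one side, factor nodes {f1, f2} on the other, with an edge wherever a factor takes a variable as an argument.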
 
Applications of graphical models include [[speech recognition]], [[computer vision]], decoding of [[low-density parity-check codes]], modeling of [[gene regulatory network]]s, gene finding and diagnosis of diseases.
 
A good introduction to the basics of graphical models is Neapolitan's ''Learning Bayesian Networks'' (2004). A more advanced and statistically oriented treatment is ''Probabilistic Networks and Expert Systems'' (1999) by Cowell, Dawid, Lauritzen and Spiegelhalter.