Graphical model
In [[probability theory]], [[statistics]], and [[machine learning]], a '''graphical model (GM)''' represents [[statistical independence|independencies]] among [[random variable]]s by a [[graph (mathematics)|graph]] in which each node is a random variable, and missing edges between nodes represent conditional independencies.
 
Two common types of GMs correspond to graphs with directed and undirected edges. If the network structure of the model is a [[directed acyclic graph]] (DAG), the GM represents a factorization of the joint [[probability]] of all random variables. More precisely, if the events are ''X''<sub>1</sub>, ..., ''X<sub>n</sub>'', then the joint probability is the product of the conditional probabilities

:P(''X<sub>i</sub>'' | parents of ''X<sub>i</sub>'') for ''i'' = 1,...,''n''.
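The factorization above can be made concrete with a minimal sketch (not part of the article): a three-node chain DAG A → B → C with binary variables, where the conditional probability table values are illustrative assumptions.

```python
# Illustrative CPTs for the chain DAG A -> B -> C (values are assumptions).
p_a = {True: 0.3, False: 0.7}                   # P(A)
p_b_given_a = {True:  {True: 0.8, False: 0.2},  # P(B | A)
               False: {True: 0.1, False: 0.9}}
p_c_given_b = {True:  {True: 0.5, False: 0.5},  # P(C | B)
               False: {True: 0.2, False: 0.8}}

def joint(a, b, c):
    """Joint probability as the product of per-node conditionals:
    P(A, B, C) = P(A) * P(B | A) * P(C | B)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Because each factor is a proper conditional distribution, the
# factorization defines a proper joint distribution: it sums to 1.
vals = (True, False)
total = sum(joint(a, b, c) for a in vals for b in vals for c in vals)
print(round(total, 10))  # → 1.0
```

The point of the factorization is economy: the full joint over ''n'' binary variables has 2<sup>''n''</sup> − 1 free parameters, while the DAG form needs only one small table per node.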
 
In other words, the [[probability distribution|joint distribution]] factors into a product of conditional distributions. The graph structure indicates direct dependencies among random variables. Any two nodes that are not in a descendant/ancestor relationship are [[Conditional independence|conditionally independent]] given the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called [[d-separation|"''d''-separation"]] holds in the graph.
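A ''d''-separation claim can be checked numerically on a small example. The sketch below (not part of the article; CPT values are illustrative assumptions) uses the chain A → B → C, where B blocks the only path between A and C, so A and C are conditionally independent given B:

```python
# Chain DAG A -> B -> C with assumed CPTs; B d-separates A from C.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True:  {True: 0.8, False: 0.2},
               False: {True: 0.1, False: 0.9}}
p_c_given_b = {True:  {True: 0.5, False: 0.5},
               False: {True: 0.2, False: 0.8}}

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def cond_c(b, a=None):
    """P(C=True | B=b), or P(C=True | A=a, B=b) when a is given,
    computed by marginalizing the joint table."""
    vals = (True, False)
    if a is None:
        num = sum(joint(x, b, True) for x in vals)
        den = sum(joint(x, b, c) for x in vals for c in vals)
    else:
        num = joint(a, b, True)
        den = sum(joint(a, b, c) for c in vals)
    return num / den

# Once B is observed, A carries no further information about C:
for b in (True, False):
    assert abs(cond_c(b, a=True) - cond_c(b, a=False)) < 1e-12
    assert abs(cond_c(b) - cond_c(b, a=True)) < 1e-12
```

Note that this only demonstrates the independence for one set of CPTs; ''d''-separation is the graphical criterion guaranteeing it holds for ''every'' distribution that factorizes over the DAG.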
 
This type of graphical model is known as a directed graphical model, [[Bayesian network]], or belief network. Classic machine learning models like [[hidden Markov models]] and [[neural networks]], as well as newer models such as [[variable-order Markov model]]s, can be regarded as special cases of Bayesian networks.
 
Graphical models with undirected edges are generally called [[Markov random field]]s or [[Markov network]]s. It can be shown that they have the same representational capacity as directed graphical models. However, while directed models are better at explicitly representing the joint probability, undirected models are better for representing conditional independences.
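In the undirected case, the joint distribution factors into a product of non-negative clique potentials rather than conditional distributions, normalized by a partition function. A minimal sketch (not part of the article; the potential values are illustrative assumptions) for the chain A — B — C:

```python
# Pairwise potentials for the undirected chain A - B - C (assumed values).
# Unlike CPTs, potentials need not sum to 1; normalization is global.
phi_ab = {(True, True): 2.0, (True, False): 1.0,
          (False, True): 1.0, (False, False): 3.0}
phi_bc = {(True, True): 1.5, (True, False): 0.5,
          (False, True): 1.0, (False, False): 2.0}

def score(a, b, c):
    """Unnormalized measure: product of clique potentials."""
    return phi_ab[(a, b)] * phi_bc[(b, c)]

vals = (True, False)
# Partition function Z sums the unnormalized scores over all states.
Z = sum(score(a, b, c) for a in vals for b in vals for c in vals)

def prob(a, b, c):
    """P(A, B, C) = (1/Z) * prod of potentials."""
    return score(a, b, c) / Z

total = sum(prob(a, b, c) for a in vals for b in vals for c in vals)
assert abs(total - 1.0) < 1e-12
```

The need for the global normalizer Z is the practical cost of the undirected form: individual factors no longer have a direct probabilistic reading, which is why directed models represent the joint probability more explicitly.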
 
Applications of graphical models include modeling of [[gene regulatory network]]s, [[speech recognition]], gene finding, [[computer vision]] and diagnosis of diseases.
 
A good introductory reference is Neapolitan, ''Learning Bayesian Networks'' (2004). A more advanced and statistically oriented book is Cowell, Dawid, Lauritzen and Spiegelhalter, ''Probabilistic Networks and Expert Systems'' (1999).
 
A computational reasoning approach is provided in Pearl, ''Probabilistic Reasoning in Intelligent Systems'' (1988),<ref name="Pearl-88">Pearl, J. (1988) ''Probabilistic Reasoning in Intelligent Systems,'' San Mateo, CA: Morgan Kaufmann.</ref> where the relationships between graphs and probabilities were formally introduced.
 
==See also==
 
==References==
<references/>
 
 
*[http://research.microsoft.com/%7Ecmbishop/PRML/Bishop-PRML-sample.pdf Graphical models, Chapter 8 of Pattern Recognition and Machine Learning by Christopher M. Bishop]