==Partially observable Markov decision process==
{{main|Partially observable Markov decision process}}
A [[POMDP|partially observable Markov decision process]] (POMDP) is a Markov decision process in which the state of the system is only partially observed. POMDPs are known to be [[NP-complete]], but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.<ref>{{cite journal
| title = Planning and acting in partially observable stochastic domains
==Markov random field==
{{main|Markov random field}}
A [[Markov random field]], or Markov network, may be considered a generalization of a Markov chain to multiple dimensions. In a Markov chain, the state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables to which it is connected. More specifically, the joint distribution of the random variables in the graph can be computed as the normalized product of the "clique potentials" of all the cliques in the graph. Modeling a problem as a Markov random field is useful because it implies that the joint distribution factorizes over the cliques of the graph.
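The factorization above can be sketched with a minimal example: a hypothetical three-node chain A–B–C of binary variables, where the cliques are the edges {A, B} and {B, C}. The potential function and graph here are illustrative assumptions, not part of any standard library:

```python
import itertools

# Clique potential for an edge: favors neighboring variables that agree.
# (Illustrative choice; any non-negative function of the clique works.)
def phi(x, y):
    return 2.0 if x == y else 1.0

# Unnormalized joint distribution: the product of the clique potentials,
# one factor per clique (here, per edge of the chain A - B - C).
def unnormalized_joint(a, b, c):
    return phi(a, b) * phi(b, c)

# Normalizing constant Z (the "partition function"): sum of the
# unnormalized product over all joint assignments.
Z = sum(unnormalized_joint(a, b, c)
        for a, b, c in itertools.product([0, 1], repeat=3))

# The joint distribution is the normalized product of clique potentials.
def joint(a, b, c):
    return unnormalized_joint(a, b, c) / Z

# Sanity check: the probabilities over all assignments sum to one.
total = sum(joint(a, b, c)
            for a, b, c in itertools.product([0, 1], repeat=3))
```

Because the potentials reward agreement along edges, fully agreeing assignments such as (0, 0, 0) receive the highest probability, illustrating how local clique potentials shape the global joint distribution.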