 
==Partially observable Markov decision process==
A [[POMDP|partially observable Markov decision process]] (POMDP) is a Markov decision process in which the state of the system is only partially observed. POMDPs are known to be [[NP complete]], but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.<ref>{{cite journal
 | title = Planning and acting in partially observable stochastic domains
 | url = http://www.sciencedirect.com/science/article/pii/S000437029800023X
 | last1 = Kaelbling | first1 = Leslie Pack
 | last2 = Littman | first2 = Michael L.
 | last3 = Cassandra | first3 = Anthony R.
 | journal = Artificial Intelligence
 | volume = 101
 | issue = 1
 | year = 1998
 | pages = 99–134
}}</ref>
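Because the state is hidden, a POMDP agent typically maintains a belief state, a probability distribution over the possible states. As an illustrative example (the notation here is generic rather than taken from a specific source: <math>T</math> denotes the transition model, <math>O</math> the observation model, <math>b</math> the current belief, and <math>\eta</math> a normalizing constant), after taking action <math>a</math> and receiving observation <math>o</math>, the belief can be updated by a Bayesian filter:
<math display="block">b'(s') = \eta \, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s).</math>
Planning then amounts to solving a Markov decision process over this (continuous) belief space, which is what makes exact solution intractable and motivates the approximation techniques mentioned above.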
 
==Markov random field==