{{Short description|Mathematical model for sequential decision making under uncertainty}}
'''Markov decision process''' ('''MDP'''), also called a [[Stochastic dynamic programming|stochastic dynamic program]] or stochastic control problem, is a model for [[sequential decision making]] when [[Outcome (probability)|outcomes]] are uncertain.<ref>{{Cite book |last=Puterman |first=Martin L. |title=Markov decision processes: discrete stochastic dynamic programming |date=1994 |publisher=Wiley |isbn=978-0-471-61977-2 |series=Wiley series in probability and mathematical statistics. Applied probability and statistics section |___location=New York}}</ref>
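The definition above can be made concrete with a small toy model. The following is a minimal sketch, not part of the article's formalism: the state names, transition table <code>P</code>, and reward table <code>R</code> are invented for illustration, with transition probabilities P(s′ | s, a) and rewards R(s, a).

```python
import random

# Hypothetical two-state, two-action MDP (illustrative values only).
# P[(s, a)] is a list of (next_state, probability) pairs summing to 1.
P = {
    ("s0", "stay"): [("s0", 0.9), ("s1", 0.1)],
    ("s0", "move"): [("s1", 1.0)],
    ("s1", "stay"): [("s1", 1.0)],
    ("s1", "move"): [("s0", 0.8), ("s1", 0.2)],
}
R = {("s0", "stay"): 0.0, ("s0", "move"): 1.0,
     ("s1", "stay"): 0.5, ("s1", "move"): 0.0}

def step(state, action, rng=random):
    """Sample the next state: the outcome is uncertain, and its
    distribution depends only on the current state and action
    (the Markov property)."""
    u, cum = rng.random(), 0.0
    for s_next, p in P[(state, action)]:
        cum += p
        if u <= cum:
            return s_next, R[(state, action)]
    return P[(state, action)][-1][0], R[(state, action)]
```

Calling <code>step("s0", "move")</code> always yields state <code>"s1"</code> with reward 1.0, since that transition is deterministic; the other entries are genuinely stochastic.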
=== Learning automata ===
{{main|Learning automata}}
Another application of MDPs in [[machine learning]] theory is called learning automata. This is also a type of reinforcement learning when the environment is stochastic. The first detailed survey of '''learning automata''' is by [[Kumpati S. Narendra|Narendra]] and Thathachar (1974), who originally described them explicitly as [[finite-state machine|finite-state automata]].
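A common update scheme for such automata is the linear reward-inaction (L<sub>R-I</sub>) rule, under which the probability of the chosen action increases after a favorable response and nothing changes after an unfavorable one. The sketch below is a hypothetical illustration of that rule; the function name, learning rate, and initial probabilities are assumptions, not from the article.

```python
def l_ri_update(probs, chosen, favorable, a=0.1):
    """Linear reward-inaction update for action probabilities.

    On a favorable response, the chosen action's probability moves
    toward 1 by p + a*(1 - p), and all probabilities are rescaled
    so they still sum to 1. On an unfavorable response ("inaction"),
    the probabilities are left unchanged.
    """
    if favorable:
        probs = [p * (1 - a) for p in probs]  # shrink everything...
        probs[chosen] += a                    # ...then boost the chosen action
    return probs

# Two actions, initially equally likely (illustrative values).
probs = [0.5, 0.5]
probs = l_ri_update(probs, chosen=0, favorable=True)
# probs is now [0.55, 0.45]: action 0 became more likely.
```

Repeated over many favorable responses, the chosen action's probability converges toward 1, which is why L<sub>R-I</sub> schemes are described as absolutely expedient in stationary random environments.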
In learning automata theory, a '''stochastic automaton''' consists of: