====Discrete space: Linear programming formulation====
If the state space and action space are finite, we could use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an [[Ergodicity|ergodic]] continuous-time Markov chain under a stationary [[policy]]. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking multiple actions. It is better for them to take an action only at the time when the system is transitioning from the current state to another state. Under some conditions (for details, see Corollary 3.14 of ''Continuous-Time Markov Decision Processes''),<ref>{{Cite book |url=https://link.springer.com/book/10.1007/978-3-642-02547-1 |title=Continuous-Time Markov Decision Processes |language=en |doi=10.1007/978-3-642-02547-1}}</ref> if our optimal value function <math>V^*</math> is independent of state <math>i</math>, we will have the following inequality:
:<math>g\geq R(i,a)+\sum_{j\in S}q(j\mid i,a)h(j) \quad \forall i \in S \text{ and } a \in A(i)</math>
If there exists a function <math>h</math> satisfying the above inequality, then <math>\bar V^*</math> will be the smallest <math>g</math> for which such an <math>h</math> exists. In order to find <math>\bar V^*</math>, we could use the following linear programming model:
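As a rough numerical illustration (not part of the formulation above), the inequality can be solved directly as a linear program over the variables <math>g</math> and <math>h</math>. The sketch below uses SciPy's <code>linprog</code>; the transition-rate array <code>q</code> and reward array <code>R</code> are made-up placeholder data, and <math>h(0)</math> is pinned to zero because <math>h</math> is only determined up to an additive constant.

<syntaxhighlight lang="python">
# Minimal sketch: solve  min g  s.t.  g >= R(i,a) + sum_j q(j|i,a) h(j)  for all i, a.
# The data below are illustrative placeholders, not taken from any source.
import numpy as np
from scipy.optimize import linprog

# Two states, two actions.  q[i, a, j] is the transition rate q(j | i, a):
# nonnegative for j != i, and each row q[i, a, :] sums to zero.
q = np.array([[[-3.0,  3.0], [-1.0,  1.0]],
              [[ 2.0, -2.0], [ 4.0, -4.0]]])
R = np.array([[ 5.0,  1.0],   # R[i, a]: reward rate in state i under action a
              [-1.0,  3.0]])

n_states, n_actions = R.shape

# Decision variables: x = [g, h(0), ..., h(n-1)].  Minimize g.
c = np.zeros(1 + n_states)
c[0] = 1.0

# Rewrite each constraint in linprog's "A_ub @ x <= b_ub" form:
#   -g + sum_j q(j|i,a) h(j) <= -R(i,a)
A_ub, b_ub = [], []
for i in range(n_states):
    for a in range(n_actions):
        A_ub.append(np.concatenate(([-1.0], q[i, a, :])))
        b_ub.append(-R[i, a])

# g and h are otherwise free; fix h(0) = 0 to remove the additive-constant freedom.
bounds = [(None, None)] + [(0.0, 0.0)] + [(None, None)] * (n_states - 1)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=bounds, method="highs")
print("optimal average reward g =", res.x[0])
print("relative value function h =", res.x[1:])
</syntaxhighlight>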