Uniformization (probability theory)

In [[probability theory]], the '''uniformization''' method (also known as '''Jensen's method'''<ref name="stewart" /> or the '''randomization method'''<ref name="ibe">{{cite book |title=Markov processes for stochastic modeling |url=https://archive.org/details/markovprocessesf00ibeo |url-access=limited |last=Ibe |first=Oliver C. |year=2009 |publisher=[[Academic Press]] |isbn=0-12-374451-2 |page=[https://archive.org/details/markovprocessesf00ibeo/page/n112 98]}}</ref>) is a method to compute transient solutions of finite-state [[continuous-time Markov chain]]s, by approximating the process by a [[discrete-time Markov chain]].<ref name="ibe" /> The original chain is scaled by the fastest transition rate ''γ'', so that transitions occur at the same rate in every state, hence the name. The method is simple to program and efficiently calculates an approximation to the transient distribution at a single point in time (near zero).<ref name="stewart" /> The method was first introduced by Winfried Grassmann in 1977.<ref>{{Cite journal | last1 = Gross | first1 = D. | last2 = Miller | first2 = D. R. | title = The Randomization Technique as a Modeling Tool and Solution Procedure for Transient Markov Processes | journal = Operations Research | volume = 32 | issue = 2 | pages = 343–361 | doi = 10.1287/opre.32.2.343 | year = 1984 }}</ref><ref>{{Cite journal | last1 = Grassmann | first1 = W. K. | doi = 10.1016/0305-0548(77)90007-7 | title = Transient solutions in markovian queueing systems | journal = Computers & Operations Research | volume = 4 | pages = 47–53 | year = 1977 }}</ref><ref>{{Cite journal | last1 = Grassmann | first1 = W. K. | title = Transient solutions in Markovian queues | doi = 10.1016/0377-2217(77)90049-2 | journal = European Journal of Operational Research | volume = 1 | issue = 6 | pages = 396–402 | year = 1977 }}</ref>
 
==Method description==
For a continuous-time [[Markov chain]] with [[transition rate matrix]] ''Q'', the uniformized discrete-time Markov chain has probability transition matrix <math>P:=(p_{ij})_{i,j}</math>, which is defined by<ref name="stewart">{{cite book |title=Probability, Markov chains, queues, and simulation: the mathematical basis of performance modeling |url=https://archive.org/details/probabilitymarko00stew |url-access=limited |last=Stewart |first=William J. |year=2009 |publisher=[[Princeton University Press]] |isbn=0-691-14062-6 |page=[https://archive.org/details/probabilitymarko00stew/page/n379 361]}}</ref><ref name="cass">{{cite book |title=Introduction to discrete event systems |last=Cassandras |first=Christos G. |last2=Lafortune |first2=Stéphane |year=2008 |publisher=Springer |isbn=0-387-33332-0}}</ref><ref name="ross">{{cite book |title=Introduction to probability models |last=Ross |first=Sheldon M. |year=2007 |publisher=Academic Press |isbn=0-12-598062-0}}</ref>
::<math>p_{ij} = \begin{cases} q_{ij}/\gamma &\text{ if } i \neq j \\ 1 - \sum_{k \neq i} q_{ik}/\gamma &\text{ if } i=j \end{cases}</math>
with ''γ'', the uniform rate parameter, chosen such that
::<math>\gamma \geq \max_i |q_{ii}|.</math>
In matrix notation:
::<math>P=I+\frac{1}{\gamma}Q.</math>
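For example, for the (purely illustrative) two-state transition rate matrix
::<math>Q=\begin{pmatrix}-2 & 2\\ 1 & -1\end{pmatrix},</math>
the smallest admissible uniform rate parameter is ''γ'' = 2, and the uniformized chain has transition matrix
::<math>P=I+\tfrac{1}{2}Q=\begin{pmatrix}0 & 1\\ \tfrac{1}{2} & \tfrac{1}{2}\end{pmatrix}.</math>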
 
For a starting distribution {{pi}}(0), the distribution at time ''t'', {{pi}}(''t''), is computed by<ref name="stewart" />
 
::<math>\pi(t) = \sum_{n=0}^{\infty} \pi(0) P^n \frac{(\gamma t)^n}{n!}e^{-\gamma t}.</math>
 
This representation shows that a continuous-time Markov chain can be described by a discrete-time Markov chain with transition matrix ''P'' as defined above, where jumps occur at the times of a [[Poisson process]] with rate ''γ'', so that the number of jumps by time ''t'' is Poisson distributed with mean&nbsp;''γt''.
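This also gives a simple way to simulate the chain at time ''t'': draw a Poisson(''γt'') number of jumps and advance the uniformized discrete-time chain that many steps. The following sketch (in Python with NumPy; the function name and interface are illustrative and not taken from the cited references) draws one sample of the state at time ''t'':

<syntaxhighlight lang="python">
import numpy as np

def sample_state_at_time(Q, pi0, t, rng=None):
    """Draw one sample of the chain's state at time t via uniformization:
    a Poisson(gamma*t) number of jumps of the uniformized discrete-time chain."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.asarray(Q, dtype=float)
    pi0 = np.asarray(pi0, dtype=float)
    gamma = np.max(np.abs(np.diag(Q)))       # uniform rate parameter
    P = np.eye(Q.shape[0]) + Q / gamma       # uniformized DTMC transition matrix
    state = rng.choice(len(pi0), p=pi0)      # initial state drawn from pi(0)
    for _ in range(rng.poisson(gamma * t)):  # Poisson(gamma*t) jumps by time t
        state = rng.choice(P.shape[0], p=P[state])
    return state
</syntaxhighlight>

Averaging indicator functions of many such samples gives a Monte Carlo estimate of {{pi}}(''t''); the series above computes the same distribution deterministically.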
 
In practice this [[series (mathematics)|series]] is terminated after finitely many terms; since the Poisson weights sum to one and each term <math>\pi(0)P^n</math> is a probability vector, the neglected tail mass bounds the truncation error.
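A minimal sketch of this truncated computation (in Python with NumPy; the stopping tolerance <code>eps</code> and all names are illustrative, and this is not the pseudocode cited in the Implementation section below):

<syntaxhighlight lang="python">
import numpy as np

def transient_distribution(Q, pi0, t, eps=1e-10):
    """Approximate pi(t) by the uniformization series, truncated once the
    neglected Poisson tail mass is below eps.  Suitable for moderate gamma*t;
    for very large gamma*t the factor exp(-gamma*t) underflows and the
    weights must be rescaled."""
    Q = np.asarray(Q, dtype=float)
    pi0 = np.asarray(pi0, dtype=float)
    gamma = np.max(np.abs(np.diag(Q)))   # uniform rate parameter
    P = np.eye(Q.shape[0]) + Q / gamma   # uniformized DTMC transition matrix

    weight = np.exp(-gamma * t)          # Poisson(gamma*t) pmf at n = 0
    total = weight                       # accumulated Poisson mass
    term = pi0.copy()                    # pi(0) P^n for the current n
    result = weight * term
    # generous cap on the number of terms (Poisson mean plus ~10 standard deviations)
    n_max = int(np.ceil(gamma * t + 10.0 * np.sqrt(gamma * t) + 10.0))
    for n in range(1, n_max + 1):
        term = term @ P                  # pi(0) P^n
        weight *= gamma * t / n          # Poisson pmf recurrence
        total += weight
        result += weight * term
        if total >= 1.0 - eps:           # neglected tail mass is at most eps
            break
    return result
</syntaxhighlight>

For the two-state example above, <code>transient_distribution([[-2.0, 2.0], [1.0, -1.0]], [1.0, 0.0], 1.0)</code> approximates {{pi}}(1) for the chain started in its first state.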
 
==Implementation==
 
[[Pseudocode]] for the algorithm is included in Appendix A of Reibman and Trivedi's 1988 paper.<ref name="reibman">{{Cite journal | last1 = Reibman | first1 = A. | last2 = Trivedi | first2 = K. | doi = 10.1016/0305-0548(88)90026-3 | title = Numerical transient analysis of markov models | journal = Computers & Operations Research | volume = 15 | pages = 19 | year = 1988 | url = http://people.ee.duke.edu/~kst/markovpapers/numerical.pdf}}</ref> Using a [[parallel algorithm|parallel]] version of the algorithm, chains with state spaces larger than 10<sup>7</sup> states have been analysed.<ref>{{Cite journal | last1 = Dingle | first1 = N. | last2 = Harrison | first2 = P. G. | author-link2 = Peter G. Harrison | last3 = Knottenbelt | first3 = W. J. | title = Uniformization and hypergraph partitioning for the distributed computation of response time densities in very large Markov models | doi = 10.1016/j.jpdc.2004.03.017 | url = http://aesop.doc.ic.ac.uk/pubs/markov-uniformization-jpdc/ | journal = Journal of Parallel and Distributed Computing | volume = 64 | issue = 8 | pages = 908–920 | year = 2004 | hdl = 10044/1/5771 | hdl-access = free }}</ref>
 
==Limitations==
 
Reibman and Trivedi state that "uniformization is the method of choice for typical problems," though they note that for [[stiff equation|stiff]] problems some tailored algorithms are likely to perform better.<ref name="reibman" />
 
==External links==
*[http://www.sis.pitt.edu/~dtipper/2130/unifm.m Matlab implementation]
 
==Notes==
{{Reflist}}
 
[[Category:Queueing theory]]
[[Category:Stochastic processes]]
[[Category:Markov processes]]