Discrete-time Markov chain: Difference between revisions

Line 75:
A state ''i'' is said to be essential or final if for all ''j'' such that ''i''&nbsp;→&nbsp;''j'' it is also true that ''j''&nbsp;→&nbsp;''i''. A state ''i'' is inessential if it is not essential.<ref>{{cite book|last=Asher Levin|first=David|title=Markov chains and mixing times|page=[https://archive.org/details/markovchainsmixi00levi_364/page/n31 16]|title-link= Markov Chains and Mixing Times |isbn=978-0-8218-4739-8|year=2009}}</ref> A state is final if and only if its communicating class is closed.
 
A Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state.<ref name="PRS"/><ref name="Lawler">{{Cite book |last=Lawler |first=Gregory F. |author-link=Greg Lawler |title=Introduction to Stochastic Processes |publisher=CRC Press |year=2006 |isbn=978-1-58488-651-X |edition=2nd |language=en}}</ref>{{rp|20}}
 
===Periodicity===
Line 87:
 
==={{anchor|Transience}}{{anchor|Recurrence}}Transience and recurrence===
A state ''i'' is said to be transient if, given that we start in state ''i'', there is a non-zero probability that we will never return to ''i''. Formally, let the [[random variable]] ''T<sub>i</sub>'' be the first return time to state ''i'' (the "[[hitting time]]"):
 
:<math> T_i = \inf \{ n\ge1: X_n = i\}.</math>
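Transience can be illustrated numerically. The sketch below (with illustrative, not sourced, parameters) simulates a biased simple random walk on the integers, a standard example of a transient chain, and estimates the probability of returning to the starting state:

```python
import random

def return_within(p_right, max_steps, rng):
    """Simulate a biased walk on the integers started at 0; return True
    if the walk revisits 0 within max_steps steps."""
    pos = 0
    for _ in range(max_steps):
        pos += 1 if rng.random() < p_right else -1
        if pos == 0:
            return True
    return False

rng = random.Random(0)
p = 0.7  # probability of stepping right; the walk drifts to +infinity
trials = 20000
est = sum(return_within(p, 1000, rng) for _ in range(trials)) / trials
# For a biased simple walk the exact return probability is 2*min(p, 1-p) = 0.6,
# so state 0 is transient: with probability 0.4 the walk never returns.
print(est)
```

The estimate clusters around 0.6, matching the known return probability for this walk and confirming a non-zero probability of never returning.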
Line 107:
 
====Positive recurrence====
Even if the hitting time is finite with probability ''1'', it need not have a finite [[expected value|expectation]]<!-- Should provide example later for this and reference it here -->. The mean recurrence time at state ''i'' is the expected return time ''M<sub>i</sub>'':
 
:<math> M_i = E[T_i]=\sum_{n=1}^\infty n\cdot f_{ii}^{(n)}.</math>
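As a sketch of this definition, the following simulation estimates the mean recurrence time <math>M_0</math> for a hypothetical two-state chain (transition probabilities chosen purely for illustration) and compares it with the exact value:

```python
import random

# Hypothetical two-state chain; transition probabilities are illustrative.
P = [[0.5, 0.5],
     [0.2, 0.8]]

def first_return_time(P, i, rng):
    """Sample T_i: number of steps until the chain, started at i, returns to i."""
    state, steps = i, 0
    while True:
        state = 0 if rng.random() < P[state][0] else 1
        steps += 1
        if state == i:
            return steps

rng = random.Random(1)
trials = 20000
M0 = sum(first_return_time(P, 0, rng) for _ in range(trials)) / trials
# Exact value: the stationary distribution is pi = (2/7, 5/7),
# so M_0 = 1/pi_0 = 3.5, and state 0 is positive recurrent.
print(M0)
```

The sample mean settles near 3.5, the exact mean recurrence time, so the hitting time here is not only finite with probability 1 but also has finite expectation.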
Line 118:
:<math> p_{ii} = 1\text{ and }p_{ij} = 0\text{ for }i \not= j.</math>
 
If every state can reach an absorbing state, then the Markov chain is an [[absorbing Markov chain]].<ref name="Grin">{{cite book
| first = Charles M.
| last = Grinstead
| first2 = J. Laurie
| last2 = Snell
| author-link2 = J. Laurie Snell
| title = Introduction to Probability
|date=July 1997
| publisher = American Mathematical Society
| isbn = 978-0-8218-0749-1
| chapter = Ch. 11: Markov Chains
| chapter-url = https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf}}</ref><ref name=Kem>
{{cite book
| first = John G.
| last = Kemeny
| author-link = John G. Kemeny
| first2 = J. Laurie
| last2 = Snell
| author-link2 = J. Laurie Snell
| editor-first = F. W.
| editor-last = Gehring
| editor2-first = P. R.
| editor2-last = Halmos
| title = Finite Markov Chains
| url = https://archive.org/details/finitemarkovchai00keme_792
| url-access = limited
| edition = Second
| orig-year = 1960
|date=July 1976
| publisher = Springer-Verlag
| ___location = New York Berlin Heidelberg Tokyo
| isbn = 978-0-387-90192-3
| pages = [https://archive.org/details/finitemarkovchai00keme_792/page/n235 224]
| chapter = Ch. 3: Absorbing Markov Chains
}}</ref>
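The absorption probabilities of such a chain can be read off the fundamental matrix <math>N = (I-Q)^{-1}</math>, where <math>Q</math> is the transient-to-transient block of the transition matrix. A minimal sketch, using a hypothetical gambler's-ruin chain on states 0–3 with states 0 and 3 absorbing:

```python
# Gambler's-ruin chain on states 0..3 with fair coin flips; states 0 and 3
# are absorbing (p_ii = 1), states 1 and 2 are transient. In canonical form
# the transition matrix splits into Q (transient-to-transient) and
# R (transient-to-absorbing); B = (I - Q)^(-1) R gives the probability of
# ending in each absorbing state.

Q = [[0.0, 0.5],
     [0.5, 0.0]]
R = [[0.5, 0.0],   # from state 1: to absorbing state 0, to absorbing state 3
     [0.0, 0.5]]   # from state 2

# Invert the 2x2 matrix I - Q by hand.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]          # fundamental matrix (I - Q)^(-1)

B = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]            # absorption probabilities

print(B)
```

Starting from state 1, the chain is absorbed at state 0 with probability 2/3 and at state 3 with probability 1/3, as expected for a fair gambler's ruin.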
 
===Reversible Markov chain{{Anchor|detailed balance}}===
Line 125 ⟶ 159:
:<math>\pi_i \Pr(X_{n+1} = j \mid X_{n} = i) = \pi_j \Pr(X_{n+1} = i \mid X_{n} = j)</math>
 
for all times ''n'' and all states ''i'' and ''j''. This condition is known as the [[detailed balance]] condition (or local [[balance equation]]).
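The detailed balance condition is straightforward to check numerically. A minimal sketch for a hypothetical birth–death chain (birth–death chains are always reversible with respect to their stationary distribution):

```python
# Check detailed balance pi_i * P[i][j] == pi_j * P[j][i] for an
# illustrative birth-death chain.

P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
pi = [0.25, 0.5, 0.25]  # stationary distribution of P

def satisfies_detailed_balance(P, pi, tol=1e-12):
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) <= tol
               for i in range(n) for j in range(n))

print(satisfies_detailed_balance(P, pi))
```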
 
Considering a fixed arbitrary time ''n'' and using the shorthand
Line 151 ⟶ 185:
[[Kolmogorov's criterion]] gives a necessary and sufficient condition for a Markov chain to be reversible directly from the transition matrix probabilities. The criterion requires that the products of probabilities around every closed loop are the same in both directions around the loop.
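For a three-state chain the only closed loops that need checking are the 3-cycles (2-cycles trivially match in both directions), so the criterion reduces to a simple product comparison. A sketch with illustrative matrices:

```python
import itertools

def kolmogorov_3cycles(P, tol=1e-12):
    """Check Kolmogorov's criterion over all 3-cycles of a finite chain:
    the product of transition probabilities around each loop must be the
    same in both directions. Sufficient on its own only for 3-state chains."""
    n = len(P)
    for i, j, k in itertools.permutations(range(n), 3):
        forward = P[i][j] * P[j][k] * P[k][i]
        backward = P[i][k] * P[k][j] * P[j][i]
        if abs(forward - backward) > tol:
            return False
    return True

# A symmetric matrix passes trivially (both directions give the same product).
P_sym = [[0.4, 0.3, 0.3],
         [0.3, 0.4, 0.3],
         [0.3, 0.3, 0.4]]

# A chain with a strong cyclic drift 0 -> 1 -> 2 -> 0 fails the criterion.
P_cyc = [[0.1, 0.8, 0.1],
         [0.1, 0.1, 0.8],
         [0.8, 0.1, 0.1]]

print(kolmogorov_3cycles(P_sym), kolmogorov_3cycles(P_cyc))
```

For the cyclic chain the forward loop product is 0.8³ = 0.512 while the reverse is 0.1³ = 0.001, so the chain cannot be reversible.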
 
Reversible Markov chains are common in [[Markov chain Monte Carlo]] (MCMC) approaches because constructing a chain that satisfies the detailed balance equation for a desired distribution '''{{pi}}''' guarantees that '''{{pi}}''' is a steady-state distribution. Even for time-inhomogeneous Markov chains, where multiple transition matrices are used, '''{{pi}}''' remains a steady-state distribution provided each transition matrix satisfies detailed balance with respect to '''{{pi}}'''.
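This is the mechanism behind the Metropolis algorithm. As a sketch (target distribution and state space chosen purely for illustration), the following builds a Metropolis transition matrix from a symmetric proposal and verifies detailed balance with the target:

```python
# Build a Metropolis transition matrix for an illustrative 3-state target
# distribution, using a uniform (symmetric) proposal over the other states,
# and verify that it satisfies detailed balance with the target.

pi = [0.2, 0.3, 0.5]  # desired stationary distribution (illustrative)
n = len(pi)
q = 1.0 / (n - 1)     # symmetric proposal probability

P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            P[i][j] = q * min(1.0, pi[j] / pi[i])  # Metropolis acceptance
    P[i][i] = 1.0 - sum(P[i])  # rejected proposals stay put

balanced = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
               for i in range(n) for j in range(n))
print(balanced)
```

Detailed balance holds because for a symmetric proposal, π<sub>i</sub>·P[i][j] = q·min(π<sub>i</sub>, π<sub>j</sub>) is symmetric in i and j; hence '''{{pi}}''' is stationary for the constructed chain.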
 
==== Closest reversible Markov chain ====
For any time-homogeneous Markov chain given by a transition matrix <math>P \in \mathbb{R}^{n \times n}</math>, any norm <math>\|\cdot\|</math> on <math>\mathbb{R}^{n \times n}</math> induced by a [[Inner product space|scalar product]], and any probability vector <math>\pi</math>, there exists a unique transition matrix <math>P^*</math> that is reversible with respect to <math>\pi</math>
and closest to <math>P</math> in the norm <math>\|\cdot\|.</math> The matrix <math>P^*</math> can be computed by solving a convex quadratic [[optimization problem]].<ref>A. Nielsen and M. Weber (2015). Numerical Linear Algebra with Applications. {{doi|10.1002/nla.1967}}.</ref>
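Solving the quadratic program is beyond a short sketch, but a related and much simpler construction, the ''additive reversibilization'' <math>\tfrac{1}{2}(P + D^{-1}P^{\mathsf T}D)</math> with <math>D=\operatorname{diag}(\pi)</math>, always yields ''a'' reversible chain (not, in general, the closest one) whenever <math>\pi</math> is stationary for <math>P</math>:

```python
# Additive reversibilization (P + D^-1 P^T D)/2 with D = diag(pi). This is
# reversible w.r.t. pi when pi is stationary for P, but it is NOT in general
# the nearest reversible chain in the sense of the quadratic program above.

P = [[0.2, 0.8, 0.0],
     [0.0, 0.2, 0.8],
     [0.8, 0.0, 0.2]]           # illustrative non-reversible chain
pi = [1/3, 1/3, 1/3]            # its stationary distribution (P is doubly stochastic)

n = len(P)
P_rev = [[0.5 * (P[i][j] + pi[j] * P[j][i] / pi[i]) for j in range(n)]
         for i in range(n)]

balanced = all(abs(pi[i] * P_rev[i][j] - pi[j] * P_rev[j][i]) < 1e-12
               for i in range(n) for j in range(n))
print(balanced)
```

Since <math>\pi</math> here is uniform, the construction reduces to symmetrizing <math>P</math>, which immediately satisfies detailed balance.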
 
For example, consider the following Markov chain:
Line 161 ⟶ 195:
This Markov chain is not reversible. Under the [[Matrix norm#Frobenius norm|Frobenius norm]], the closest reversible Markov chain with respect to <math>\pi = \left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)</math> can be computed as
[[File:Mchain simple corrected C1.png|frameless|center]]
If the [[probability vector]] is instead chosen as <math>\pi=\left( \frac{1}{4}, \frac{1}{4}, \frac{1}{2} \right)</math>, then the closest reversible Markov chain under the Frobenius norm is approximately given by
[[File:Mvchain approx C2.png|400px|frameless|center]]
 
Line 184 ⟶ 218:
|archive-url= https://web.archive.org/web/20150319050311/https://books.google.com/books?id=JBBRiuxTN0QC&pg=PA35
|archive-date= 2015-03-19
|url-access= subscription
}}</ref> in which case the unique such distribution is given by <math>\pi_i=\frac{1}{M_i}</math> where <math>M_i=\mathbb{E}(T_i)</math> is the mean recurrence time of ''i''.<ref name="PRS"/>
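The relation <math>\pi_i = 1/M_i</math> can be sketched numerically: approximate the stationary distribution of a hypothetical two-state chain by power iteration, then read off the mean recurrence times as its reciprocals.

```python
# Approximate the stationary distribution of an illustrative two-state chain
# by power iteration, then read off mean recurrence times as M_i = 1/pi_i.

P = [[0.7, 0.3],
     [0.1, 0.9]]   # illustrative transition matrix

pi = [0.5, 0.5]    # arbitrary starting distribution
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

M = [1.0 / p for p in pi]  # mean recurrence times
print(pi, M)
```

For this chain the exact stationary distribution is (0.25, 0.75), so the mean recurrence times are 4 and 4/3: the chain started in state 0 takes 4 steps on average to return there.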
 
Line 232 ⟶ 267:
* [[John G. Kemeny]] & [[J. Laurie Snell]] (1960) ''Finite Markov Chains'', D. van Nostrand Company {{ISBN|0-442-04328-7}}
* E. Nummelin. "General irreducible Markov chains and non-negative operators". Cambridge University Press, 1984, 2004. {{ISBN|0-521-60494-X}}
* [[Eugene Seneta|Seneta, E.]] ''Non-negative Matrices and Markov Chains''. 2nd rev. ed., Springer Series in Statistics, 1981, XVI, 288 pp. (Originally published by Allen & Unwin Ltd., London, 1973.) {{ISBN|978-0-387-29765-1}}
{{refend}}
[[zh-yue:離散時間馬可夫鏈]]