Discrete-time Markov chain: Difference between revisions

add stationary distribution and ergodic theorem; move n-step transitions earlier
more stationary distribution content from Markov chain (see its history for attribution)
This condition implies that <math>\pi P^n=\pi</math> for all <math>n\in\mathbb{N}</math>, and hence if the Markov chain <math>(X_n, n\in \mathbb{N})</math> has initial distribution <math>\pi</math> (that is, <math>X_0</math> has distribution <math>\pi</math>) then <math>X_n</math> has distribution <math>\pi</math> for every <math>n\in\mathbb{N}</math>.<ref name="PRS"/>
 
If a Markov chain is irreducible then it has a stationary distribution if and only if it is positive recurrent, in which case the unique such distribution is given by <math>\pi_i=\frac{1}{M_i}</math> where <math>M_i=\mathbb{E}(T_i)</math> is the mean recurrence time of ''i''.<ref name="PRS"/><ref>{{citation
|last= Serfozo
|first= Richard
|doi= 10.1007/978-3-540-89332-5
|isbn= 978-3-540-89331-8
|mr= 2484222
|page= 35
|title= Basics of Applied Stochastic Processes
|url= https://books.google.com/books?id=JBBRiuxTN0QC&pg=PA35
|year= 2009
|series= Probability and Its Applications
|url-status= live
|archiveurl= https://web.archive.org/web/20150319050311/https://books.google.com/books?id=JBBRiuxTN0QC&pg=PA35
|archivedate= 2015-03-19
}}</ref>
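
The relation <math>\pi_i = 1/M_i</math> can be checked numerically. The sketch below, which uses a small transition matrix chosen for illustration (it is not from the article), computes the stationary distribution of an irreducible chain as a left eigenvector of the transition matrix and reads off the mean recurrence times:

```python
import numpy as np

# A small irreducible (hence positive recurrent) transition matrix,
# chosen purely for illustration.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1
# (i.e. a right eigenvector of P transposed), normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()

# pi satisfies pi P = pi, and the mean recurrence time of state i
# is M_i = 1 / pi_i.
assert np.allclose(pi @ P, pi)
M = 1.0 / pi
```

Here the eigenvector computation stands in for solving the balance equations directly; either approach gives the same <math>\pi</math> for an irreducible finite chain.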
 
If a chain has more than one closed communicating class, its stationary distribution is not unique: each [[#Reducibility|closed communicating class]] <math>C_i</math> in the chain has its own unique stationary distribution <math>\pi_i</math>, and extending each <math>\pi_i</math> to the whole chain by setting it to zero outside its class shows that the set of invariant measures of the original chain is exactly the set of convex combinations of the <math>\pi_i</math>'s. However, if a state ''j'' is aperiodic, then
<math>\lim\nolimits_{n \rightarrow \infty} p_{jj}^{(n)} = \tfrac{C}{M_j}</math>
and for any other state ''i'', let ''f<sub>ij</sub>'' be the probability that the chain ever visits state ''j'' if it starts at&nbsp;''i'',
<math>\lim\nolimits_{n \rightarrow\infty} p_{ij}^{(n)} = C \tfrac{f_{ij}}{M_j}.</math>
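
For an irreducible, aperiodic, positive recurrent chain every <math>f_{ij} = 1</math> and the constant is 1, so every row of <math>P^n</math> converges to the stationary distribution. A minimal numerical sketch, using a hypothetical two-state matrix (not from the article):

```python
import numpy as np

# An irreducible, aperiodic two-state chain; its stationary
# distribution is (3/8, 5/8), so lim p_ij^(n) = pi_j for all i.
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# n-step transition matrix P^n for a moderately large n.
Pn = np.linalg.matrix_power(P, 50)

# Both rows of P^n are (numerically) the stationary distribution.
pi = np.array([3 / 8, 5 / 8])
assert np.allclose(Pn, np.vstack([pi, pi]))
```

Convergence here is geometric: the second eigenvalue of this matrix is 0.2, so the rows agree with <math>\pi</math> to machine precision well before <math>n = 50</math>.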
 
If a state ''i'' is periodic with period ''k''&nbsp;>&nbsp;1 then the limit
<math>\lim\nolimits_{n \rightarrow \infty} p_{ii}^{(n)}</math>
does not exist, although the limit
<math>\lim\nolimits_{n \rightarrow \infty} p_{ii}^{(kn+r)}</math>
does exist for every integer&nbsp;''r''.
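
The periodic case is easy to see on the two-state chain that deterministically switches state at every step (an illustrative example, not from the article), which has period <math>k = 2</math>:

```python
import numpy as np

# A chain that always switches state: period k = 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# p_00^(n) alternates between 1 (n even) and 0 (n odd), so the plain
# limit as n -> infinity does not exist, but the limit along each
# residue class r mod 2 (subsequences kn + r) does.
even = [np.linalg.matrix_power(P, 2 * n)[0, 0] for n in range(1, 5)]
odd = [np.linalg.matrix_power(P, 2 * n + 1)[0, 0] for n in range(1, 5)]
assert even == [1.0, 1.0, 1.0, 1.0]
assert odd == [0.0, 0.0, 0.0, 0.0]
```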
 
===Steady-state analysis and the time-inhomogeneous Markov chain===
A Markov chain need not be time-homogeneous to have an equilibrium distribution. If there is a probability distribution over states <math>\boldsymbol{\pi}</math> such that
 
:<math>\pi_j = \sum_{i \in S} \pi_i \, \Pr(X_{n+1} = j \mid X_n = i)</math>
 
for every state ''j'' and every time ''n'', then <math>\boldsymbol{\pi}</math> is an equilibrium distribution of the Markov chain. This can occur in Markov chain Monte Carlo (MCMC) methods in situations where several different transition matrices are used, because each is efficient for a particular kind of mixing, but each matrix respects a shared equilibrium distribution.
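
This can be sketched with two hypothetical transition matrices (chosen for illustration) that share the same equilibrium distribution: both matrices below are doubly stochastic, so the uniform distribution is invariant under each, and alternating them, as a time-inhomogeneous chain or a cycled MCMC update, preserves it at every step:

```python
import numpy as np

# Two different transition matrices; both are doubly stochastic, so
# the uniform distribution pi = (1/3, 1/3, 1/3) satisfies pi P = pi
# for each of them.
P1 = np.array([[0.50, 0.25, 0.25],
               [0.25, 0.50, 0.25],
               [0.25, 0.25, 0.50]])
P2 = np.array([[0.0, 0.5, 0.5],
               [0.5, 0.0, 0.5],
               [0.5, 0.5, 0.0]])

pi = np.full(3, 1 / 3)

# Alternating the kernels (a time-inhomogeneous chain) leaves the
# shared equilibrium distribution unchanged at every step.
dist = pi.copy()
for P in [P1, P2, P1, P2]:
    dist = dist @ P
    assert np.allclose(dist, pi)
```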
 
==Hitting times==