All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing a given state to depend on the previous two or three states rather than on a single previous state; the transition probabilities are then extended to encompass sets of three or four adjacent states (or in general <math>K</math> adjacent states). The disadvantage of such models is that the dynamic-programming algorithms for training them have an <math>O(N^K \, T)</math> running time, for <math>N</math> possible hidden states, <math>K</math> adjacent states and <math>T</math> total observations (i.e. a length-<math>T</math> Markov chain).
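For concreteness, the following minimal sketch (all parameter values are arbitrary and illustrative, not taken from any cited source) runs the Viterbi-style dynamic program for a second-order HMM, i.e. <math>K = 3</math> adjacent states, by tracking pairs of hidden states; each update touches <math>N^3</math> entries, giving the <math>O(N^3 \, T)</math> running time noted above.

<syntaxhighlight lang="python">
import numpy as np

N, T = 4, 50                        # hidden states, sequence length (illustrative)
rng = np.random.default_rng(0)

# Second-order transition tensor: A[i, j, k] = P(state k | previous states i, j)
A = rng.random((N, N, N))
A /= A.sum(axis=2, keepdims=True)
B = rng.random((N, 6))              # emission probabilities over 6 symbols
B /= B.sum(axis=1, keepdims=True)
obs = rng.integers(0, 6, size=T)

# delta[i, j] = best log-probability of any path with state i at time t-1
# and state j at time t; initialised with a uniform prior over pairs.
delta = (np.log(1.0 / (N * N))
         + np.log(B[:, obs[0]])[:, None]
         + np.log(B[:, obs[1]])[None, :])
for t in range(2, T):
    # Each step maximises over N choices for N*N pairs: N^3 work per observation.
    scores = delta[:, :, None] + np.log(A)          # indexed [i, j, k]
    delta = scores.max(axis=0) + np.log(B[:, obs[t]])[None, :]

print("log-probability of best path:", delta.max())
</syntaxhighlight>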
 
Another recent extension is the ''triplet Markov model'',<ref name="TMM">{{cite journal |doi=10.1016/S1631-073X(02)02462-7 |volume=335 |issue=3 |title=Chaînes de Markov Triplet |year=2002 |journal=Comptes Rendus Mathématique |pages=275–278 |last1=Pieczynski |first1=Wojciech |url=https://comptes-rendus.academie-sciences.fr/mathematique/articles/10.1016/S1631-073X(02)02462-7/ }}</ref> in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. A link has also been established between the ''theory of evidence'' and triplet Markov models,<ref name="TMMEV">{{cite journal |doi=10.1016/j.ijar.2006.05.001 |volume=45 |title=Multisensor triplet Markov chains and theory of evidence |year=2007 |journal=International Journal of Approximate Reasoning |pages=1–16 |last1=Pieczynski |first1=Wojciech |doi-access=free }}</ref> which makes it possible to fuse data in a Markovian context<ref name="JASP">[http://asp.eurasipjournals.com/content/pdf/1687-6180-2012-134.pdf M. Y. Boudaren, E. Monfrini, W. Pieczynski, and A. Aissani, Dempster–Shafer fusion of multisensor signals in nonstationary Markovian context, EURASIP Journal on Advances in Signal Processing, No. 134, 2012.]</ref> and to model nonstationary data.<ref name="TSP">[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1468502&contentType=Journals+%26+Magazines&searchField%3DSearch_All%26queryText%3Dlanchantin+pieczynski P. Lanchantin and W. Pieczynski, Unsupervised restoration of hidden nonstationary Markov chain using evidential priors, IEEE Transactions on Signal Processing, Vol. 53, No. 8, pp. 3091–3098, 2005.]</ref><ref name="SPL">[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6244854&contentType=Journals+%26+Magazines&searchField%3DSearch_All%26queryText%3Dboudaren M. Y. Boudaren, E. Monfrini, and W. Pieczynski, Unsupervised segmentation of random discrete data hidden with switching noise distributions, IEEE Signal Processing Letters, Vol. 19, No. 10, pp. 619–622, October 2012.]</ref> Alternative multi-stream data fusion strategies have also been proposed in the recent literature, e.g.<ref>Sotirios P. Chatzis, Dimitrios Kosmopoulos, [https://ieeexplore.ieee.org/document/6164251/ "Visual Workflow Recognition Using a Variational Bayesian Treatment of Multistream Fused Hidden Markov Models"], IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 7, pp. 1076–1086, July 2012.</ref>
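As an informal illustration of the triplet construction, the sketch below (all names and parameter values are arbitrary, not drawn from the cited papers) simulates a chain in which the pair <math>(X, U)</math> is jointly Markov and the auxiliary process <math>U</math> switches the observation noise between two regimes, a simple example of the "data specificities" such a model can capture.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N_X, N_U, T = 2, 2, 200             # states of X, states of U, chain length

# Joint transition kernel over the pair (X, U), flattened to a
# (N_X*N_U) x (N_X*N_U) row-stochastic matrix: the pair is Markov,
# even though X alone need not be.
P = rng.random((N_X * N_U, N_X * N_U))
P /= P.sum(axis=1, keepdims=True)

means = np.array([0.0, 2.0])        # observation mean, indexed by x
sigmas = np.array([0.3, 1.5])       # noise level, indexed by the auxiliary u

state = 0                           # jointly encodes (x, u)
xs, us, ys = [], [], []
for _ in range(T):
    x, u = divmod(state, N_U)
    xs.append(x)
    us.append(u)
    ys.append(rng.normal(means[x], sigmas[u]))   # Y_t depends on X_t and U_t
    state = rng.choice(N_X * N_U, p=P[state])
</syntaxhighlight>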
 
Finally, a different rationale for modeling nonstationary data with hidden Markov models was suggested in 2012.<ref name="Reservoir-HMM">{{cite journal |last1=Chatzis |first1=Sotirios P. |last2=Demiris |first2=Yiannis |year=2012 |title=A Reservoir-Driven Non-Stationary Hidden Markov Model |journal=Pattern Recognition |volume=45 |issue=11 |pages=3985–3996 |doi=10.1016/j.patcog.2012.04.018 |bibcode=2012PatRe..45.3985C |hdl=10044/1/12611 |hdl-access=free }}</ref> It consists of employing a small recurrent neural network (RNN), specifically a reservoir network,<ref>M. Lukoševičius, H. Jaeger (2009) Reservoir computing approaches to recurrent neural network training, Computer Science Review '''3''': 127–149.</ref> to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. The result is a nonstationary HMM whose transition probabilities evolve over time in a manner inferred from the data, rather than being dictated by an ad hoc model of temporal evolution.
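The sketch below illustrates the general idea only, not the cited model itself: a fixed random reservoir summarizes the observation history, and a readout (here random for illustration; learned in an actual model) maps the reservoir state to a time-varying transition matrix.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N, D, T = 3, 20, 50                 # hidden states, reservoir units, length
y = rng.normal(size=T)              # an observed scalar sequence (illustrative)

W = rng.normal(scale=0.1, size=(D, D))      # fixed recurrent reservoir weights
W_in = rng.normal(scale=0.5, size=D)        # fixed input weights
V = rng.normal(scale=0.1, size=(N, N, D))   # readout; learned in a real model

h = np.zeros(D)
for t in range(T):
    h = np.tanh(W @ h + W_in * y[t])         # echo-state-style reservoir update
    logits = V @ h                           # project reservoir state; shape (N, N)
    # Row-wise softmax yields the transition matrix conditioning on history:
    A_t = np.exp(logits - logits.max(axis=1, keepdims=True))
    A_t /= A_t.sum(axis=1, keepdims=True)    # row-stochastic transitions at time t
</syntaxhighlight>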