Layered hidden Markov model
The '''layered [[hidden Markov model]] (LHMM)''' is a [[statistical model]] derived from the hidden Markov model (HMM).
A layered hidden Markov model (LHMM) consists of ''N'' levels of HMMs, where the HMMs on level ''i'' + 1 correspond to observation symbols or probability generators at level ''i''.
Every level ''i'' of the LHMM consists of ''K''<sub>''i''</sub> HMMs running in parallel.
LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way, it can be beneficial to embed the HMM in a layered structure which, in theory, cannot solve any problem the basic HMM cannot, but can solve some problems more efficiently because less training data is needed.
 
== The layered hidden Markov model ==
 
A layered hidden Markov model (LHMM) consists of <math>N</math> levels of HMMs, where the HMMs on level <math>i+1</math> correspond to observation symbols or probability generators at level <math>i</math>.
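The layering described above can be sketched in code. The following is a minimal two-level illustration, not the implementation from the literature: all model parameters, variable names, and the use of Viterbi decoding as the inter-level interface are illustrative assumptions. The lower-level HMM is decoded from the raw observations, and its inferred state sequence is then treated as the observation-symbol sequence of the upper-level HMM.

```python
# Hypothetical two-level LHMM sketch: the lower HMM's most likely state
# sequence (via Viterbi) serves as the observation sequence of the upper HMM.
import numpy as np

def viterbi(pi, A, B, obs):
    """Return the most likely hidden-state path for the observations `obs`.

    pi: initial state distribution, shape (n,)
    A:  state transition matrix, A[i, j] = P(state j | state i)
    B:  emission matrix, B[i, k] = P(symbol k | state i)
    """
    n, T = len(pi), len(obs)
    delta = np.zeros((T, n))          # best path probability ending in each state
    psi = np.zeros((T, n), dtype=int) # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):    # trace back-pointers
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Lower-level HMM (level 1): 2 states, 2 observation symbols (made-up numbers).
pi_lo = np.array([0.6, 0.4])
A_lo = np.array([[0.7, 0.3], [0.4, 0.6]])
B_lo = np.array([[0.9, 0.1], [0.2, 0.8]])

# Upper-level HMM (level 2): its "observations" are the lower HMM's states.
pi_hi = np.array([0.5, 0.5])
A_hi = np.array([[0.8, 0.2], [0.2, 0.8]])
B_hi = np.array([[0.85, 0.15], [0.1, 0.9]])

raw_obs = [0, 0, 1, 1, 1, 0]
lower_states = viterbi(pi_lo, A_lo, B_lo, raw_obs)       # level-1 inference
upper_states = viterbi(pi_hi, A_hi, B_hi, lower_states)  # level-2 inference
print(lower_states.tolist(), upper_states.tolist())
```

In a full LHMM the interface between levels need not be a single hard state sequence: each level may run several HMMs in parallel and pass soft outputs (likelihoods) upward, and decoding is typically done over a sliding window rather than the whole sequence.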
 
== References ==
*N. Oliver, A. Garg and E. Horvitz, "Layered Representations for Learning and Inferring Office Activity from Multiple Sensory Channels", ''Computer Vision and Image Understanding'', vol. 96, pp. 163&ndash;180, 2004.
 
[[Category:Machine learning]]