Layered hidden Markov model

{{Short description|Multilevel, non-directly observable 'probability engine'}}
The '''layered hidden Markov model''' ('''LHMM''') is a [[statistical model]] derived from the [[hidden Markov model]] (HMM).
A layered hidden Markov model consists of ''N'' levels of HMMs, where the HMMs on level ''i'' + 1 correspond to observation symbols or probability generators at level ''i''.
Every level ''i'' of the LHMM consists of ''K''<sub>''i''</sub> HMMs running in parallel.<ref>N. Oliver, A. Garg and E. Horvitz, "Layered Representations for Learning and Inferring Office Activity from Multiple Sensory Channels", Computer Vision and Image Understanding, vol. 96, pp. 163&ndash;180, 2004.
</ref>
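
As a minimal sketch of this layered structure (not taken from the cited paper; all parameters and the <code>viterbi</code> helper are hypothetical toy values), the following Python example assumes ''N'' = 2 levels with ''K''<sub>2</sub> = 2 bottom-level HMMs running in parallel over the raw symbols, whose inferred state sequences are combined into the observation symbols of a single top-level HMM:

<syntaxhighlight lang="python">
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most-likely state path of one discrete HMM, computed in the log domain."""
    T, n = len(obs), len(start_p)
    log_d = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = log_d[:, None] + np.log(trans_p)   # scores[i, j]: reach state j via i
        back[t] = scores.argmax(axis=0)
        log_d = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(log_d.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Level 2 (bottom): K_2 = 2 HMMs running in parallel on the raw symbol stream.
level2 = [
    dict(start_p=np.array([0.6, 0.4]),
         trans_p=np.array([[0.7, 0.3], [0.4, 0.6]]),
         emit_p=np.array([[0.9, 0.1], [0.2, 0.8]])),
    dict(start_p=np.array([0.5, 0.5]),
         trans_p=np.array([[0.6, 0.4], [0.3, 0.7]]),
         emit_p=np.array([[0.8, 0.2], [0.1, 0.9]])),
]
raw = [0, 0, 1, 1, 0, 1, 1, 1]                    # raw observation symbols
paths = [viterbi(raw, **h) for h in level2]       # one state path per bottom HMM

# Level 1 (top): its observation alphabet is the joint level-2 state,
# encoded here as the single symbol 2*a + b in {0, 1, 2, 3}.
level1_obs = [2 * a + b for a, b in zip(*paths)]
level1 = dict(start_p=np.array([0.5, 0.5]),
              trans_p=np.array([[0.8, 0.2], [0.2, 0.8]]),
              emit_p=np.array([[0.4, 0.3, 0.2, 0.1],
                               [0.1, 0.2, 0.3, 0.4]]))
print(viterbi(level1_obs, **level1))              # inferred high-level states
</syntaxhighlight>

Here the joint bottom-level state plays the role of the "observation symbols or probability generators" for the level above; other realizations pass up likelihoods or posterior distributions instead of hard Viterbi states.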
== Background ==
 
LHMMs are sometimes useful because a suitably constrained structure can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough [[Training, validation, and test data sets|training data]] were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way, it can be beneficial to embed the HMM in a layered structure which, in theory, may not be able to solve any problems the basic HMM cannot, but which can solve some problems more efficiently because less training data is needed.
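
To make the transition constraint concrete, the following sketch (hypothetical parameters, not from the cited reference) shows a left-to-right transition matrix in which the zero entries forbid arbitrary jumps, so only 7 of the 16 transition probabilities can be nonzero and far fewer parameters must be estimated from training data:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 4-state left-to-right HMM: zeros forbid backward jumps and
# skips, so each state can only stay put or advance to its successor.
trans_p = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.8, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])
assert np.allclose(trans_p.sum(axis=1), 1.0)  # each row is still a distribution
</syntaxhighlight>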
 
== The layered hidden Markov model ==