{{Short description|Statistical model derived from the hidden Markov model}}
The '''layered hidden Markov model''' ('''LHMM''') is a statistical model derived from the [[hidden Markov model]] (HMM). A layered hidden Markov model consists of ''N'' levels of HMMs, where the HMMs on level ''i'' + 1 correspond to observation symbols or probability generators at level ''i''. Every level ''i'' of the LHMM consists of ''K''<sub>''i''</sub> HMMs running in parallel.<ref>N. Oliver, A. Garg and E. Horvitz, "Layered Representations for Learning and Inferring Office Activity from Multiple Sensory Channels", ''Computer Vision and Image Understanding'', vol. 96, pp. 163–180, 2004.</ref>
== Background ==
LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough [[Training, validation, and test data sets|training data]] were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM in a layered structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently because it needs less training data.
== The layered hidden Markov model ==
A layered hidden Markov model (LHMM) consists of <math>N</math> levels of HMMs, where the HMMs on level <math>i+1</math> correspond to observation symbols or probability generators at level <math>i</math>. Every level <math>i</math> of the LHMM consists of <math>K_i</math> HMMs running in parallel.
At any given level <math>L</math> in the LHMM, a sequence of <math>T_L</math> observation symbols <math>\mathbf{o}_L=\{o_1, o_2, \dots, o_{T_L}\}</math> can be used to classify the input into one of <math>K_L</math> classes, where each class corresponds to one of the <math>K_L</math> HMMs at level <math>L</math>. This classification can then be used to generate a new observation for the level <math>L-1</math> HMMs. At the lowest layer of the LHMM, primitive observation symbols would be generated directly from observations of the modelled process, for example from quantized sensor values in a trajectory-tracking task. Thus at each layer of the LHMM the observations (classified outputs) from the layer below form the basis for the classification at the layer above.
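This cascaded classification can be sketched in a few lines of Python. In the sketch below, each discrete HMM is assumed to be represented by a triple <code>(pi, A, B)</code> of initial-state, transition, and emission probabilities, and the level-<math>L+1</math> models are scored on fixed-length, non-overlapping windows of primitive symbols; this representation, the windowing scheme, and the function names are illustrative assumptions, not part of the model definition.

<syntaxhighlight lang="python">
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (normalised at each step for
    numerical stability). pi: (S,), A: (S, S), B: (S, M)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

def classify_window(bank, window):
    """'Winner takes all': index of the level-(L+1) HMM in `bank` that
    best explains a window of primitive symbols."""
    return int(np.argmax([forward_loglik(pi, A, B, window)
                          for (pi, A, B) in bank]))

def lift_observations(bank, primitive_obs, window_len):
    """Turn a primitive symbol stream into the observation sequence of
    the level above: one winning-HMM index per window."""
    windows = [primitive_obs[t:t + window_len]
               for t in range(0, len(primitive_obs) - window_len + 1, window_len)]
    return [classify_window(bank, w) for w in windows]
</syntaxhighlight>

The sequence returned by <code>lift_observations</code> plays the role of <math>\mathbf{o}_L</math> for the HMM one level up, which can itself be scored with <code>forward_loglik</code>.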
Instead of simply using the winning HMM at level <math>L+1</math> as an input symbol for the HMM at level <math>L</math>, it is possible to use it as a [[probability generator]] by passing the complete [[probability distribution]] up the layers of the LHMM. Thus, instead of a "winner takes all" strategy where the most probable HMM is selected as an observation symbol, the likelihood <math>L(i)</math> of observing the <math>i</math>th HMM can be used in the recursion formula of the level <math>L</math> HMM to account for the uncertainty in the classification of the HMMs at level <math>L+1</math>. If the classification of the HMMs at level <math>L+1</math> is uncertain, it is then possible to pay more attention to the a priori information encoded in the HMM at level <math>L</math>.
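Continuing the sketch above (and reusing its <code>forward_loglik</code>), one minimal way to realise such a probability generator is to normalise the per-model likelihoods into a distribution and replace the emission term <math>b_s(o_t)</math> in the level-<math>L</math> forward recursion by its expectation under that distribution. This "soft evidence" formulation is a standard way to propagate the classification uncertainty, shown here under the same assumed <code>(pi, A, B)</code> representation; it is not necessarily the exact formulation used in the cited reference.

<syntaxhighlight lang="python">
import numpy as np  # forward_loglik is taken from the previous sketch

def soft_symbol(bank, window):
    """Normalised likelihoods L(i) of the level-(L+1) HMMs for one
    window, passed upward as a distribution instead of a single winner."""
    logliks = np.array([forward_loglik(pi, A, B, window)
                        for (pi, A, B) in bank])
    w = np.exp(logliks - logliks.max())   # subtract max for stability
    return w / w.sum()

def forward_soft(pi, A, B, soft_obs):
    """Forward recursion for the level-L HMM when each observation is a
    distribution l_t over the K lower-level HMMs: the emission term
    B[:, o_t] is replaced by its expectation B @ l_t."""
    alpha = pi * (B @ soft_obs[0])
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for l in soft_obs[1:]:
        alpha = (alpha @ A) * (B @ l)
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik
</syntaxhighlight>

When the distribution returned by <code>soft_symbol</code> is sharply peaked, this reduces to the winner-takes-all scheme; when it is flat, the transition structure of the level-<math>L</math> HMM (its a priori information) dominates the recursion.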
== See also ==
*[[Hierarchical hidden Markov model]]
== References ==
{{Reflist}}
[[Category:Machine learning]]