Instead of simply using the winning HMM at level <math>L+1</math> as an input symbol for the HMM at level <math>L</math>, it is possible to use it as a [[probability generator]] by passing the complete [[probability distribution]] up the layers of the LHMM. Thus, instead of a "winner takes all" strategy where the most probable HMM is selected as an observation symbol, the likelihood <math>L(i)</math> of observing the <math>i</math>th HMM can be used in the recursion formula of the level <math>L</math> HMM to account for the uncertainty in the classification of the HMMs at level <math>L+1</math>. Thus, if the classification of the HMMs at level <math>L+1</math> is uncertain, it is possible to pay more attention to the a priori information encoded in the HMM at level <math>L</math>.
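As an illustrative sketch (using standard forward-algorithm notation that is not defined elsewhere in this article), let <math>a_{ij}</math> denote the transition probabilities of the level <math>L</math> HMM, <math>b_j(k)</math> the probability that its state <math>j</math> emits the observation symbol corresponding to the <math>k</math>th HMM at level <math>L+1</math>, and <math>L_t(k)</math> the likelihood of that HMM at time <math>t</math>. One possible form of the forward recursion then weights the emission term by the whole lower-level distribution rather than a single winning symbol:

:<math>\alpha_t(j) = \left[\sum_i \alpha_{t-1}(i)\, a_{ij}\right] \sum_k L_t(k)\, b_j(k).</math>

When the lower-level classification is confident, the sum over <math>k</math> is dominated by a single term and the update reduces to the winner-takes-all case.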
An LHMM could in practice be transformed into a single-layered HMM where all the different models are concatenated together. One of the advantages that may be expected from using the LHMM over a large single-layered HMM is that the LHMM is less likely to suffer from [[overfitting]], since the individual sub-models are trained independently on smaller amounts of data.
==See also==