<math>\mathbf{o}_L=\{o_1, o_2, ..., o_{T_L}\}</math> can be used to classify the input into one of <math>K_L</math> classes, where each class corresponds to one of the <math>K_L</math> HMMs at level <math>L</math>. This classification can then be used to generate a new observation for the level <math>L-1</math> HMMs. At the lowest layer, i.e. level <math>N</math>, primitive observation symbols <math>\mathbf{o}_p=\{o_1, o_2, ..., o_{T_p}\}</math> would be generated directly from observations of the modeled process. For example, in a trajectory-tracking task the primitive observation symbols would originate from quantized sensor values. Thus at each layer of the LHMM the observations originate from the classification performed by the layer below, except at the lowest layer, where the observation symbols originate from measurements of the observed process.
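The layering described above can be sketched in code. The following is an illustrative example, not an implementation from the article: each layer holds a bank of HMMs, a window of observations is scored against each HMM with the forward algorithm, and the index of the winning HMM becomes one observation symbol for the layer above. All model parameters here are invented for illustration.

```python
# Illustrative sketch of one LHMM layer: a bank of discrete-observation HMMs
# classifies each observation window, and the winning HMM's index becomes
# an observation symbol for the next layer up. Parameters are made up.

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | HMM) for a discrete-observation HMM."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)

def classify(hmm_bank, obs):
    """Winner-takes-all: index of the HMM with the highest likelihood."""
    scores = [forward_likelihood(pi, A, B, obs) for (pi, A, B) in hmm_bank]
    return max(range(len(scores)), key=scores.__getitem__)

# Lowest layer: two 2-state HMMs over primitive symbols {0, 1}.
hmm_a = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
         [[0.9, 0.1], [0.9, 0.1]])  # emits symbol 0 with high probability
hmm_b = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
         [[0.1, 0.9], [0.1, 0.9]])  # emits symbol 1 with high probability
low_layer = [hmm_a, hmm_b]

# Each window of primitive observations (e.g. quantized sensor values)
# is classified; the sequence of winner indices is the observation
# sequence for the layer above.
windows = [[0, 0, 0], [1, 1, 1], [0, 0, 1]]
upper_obs = [classify(low_layer, w) for w in windows]
print(upper_obs)  # [0, 1, 0]
```

In a full LHMM the same `classify` step would be repeated at every layer, with each layer's winner sequence serving as input to the layer above it.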
Instead of simply using the winning HMM at level <math>L+1</math> as an input symbol for the HMM at level <math>L</math>, it is possible to use it as a [[probability generator]] by passing the complete [[probability distribution]] up the layers of the LHMM. Thus, instead of a "winner takes all" strategy where the most probable HMM is selected as an observation symbol, the likelihood <math>L(i)</math> of observing the <math>i</math>th HMM can be used in the recursion formula of the level <math>L</math> HMM to account for the uncertainty in the classification of the HMMs at level <math>L+1</math>. Consequently, if the classification of the HMMs at level <math>L+1</math> is uncertain, it is possible to pay more attention to the a priori information encoded in the HMM at level <math>L</math>.
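A common way to realize this idea, sketched below with invented parameters rather than anything specified in the article, is to treat each upward-passed likelihood vector as a "soft" observation: in the upper HMM's forward recursion the hard emission term <math>b_t(o)</math> is replaced by the expectation <math>\sum_i L(i)\,b_t(i)</math> over the normalized likelihoods <math>L(i)</math> of the lower-level HMMs.

```python
# Illustrative soft-evidence forward recursion: each observation is a
# probability distribution over symbols (one symbol per lower-level HMM)
# rather than a single winning index. All parameters are made up.

def forward_soft(pi, A, B, soft_obs):
    """Forward algorithm where each observation is a distribution over symbols."""
    n = len(pi)

    def emit(state, dist):
        # Expected emission probability under the soft observation:
        # sum_i L(i) * B[state][i]
        return sum(p * B[state][i] for i, p in enumerate(dist))

    alpha = [pi[s] * emit(s, soft_obs[0]) for s in range(n)]
    for dist in soft_obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * emit(t, dist)
                 for t in range(n)]
    return sum(alpha)

# Upper-level HMM: two states, two observation symbols
# (one per lower-level HMM).
pi = [0.5, 0.5]
A = [[0.8, 0.2], [0.2, 0.8]]
B = [[0.9, 0.1], [0.1, 0.9]]

hard = [[1.0, 0.0], [1.0, 0.0]]      # confident lower-level classification
soft = [[0.55, 0.45], [0.55, 0.45]]  # uncertain lower-level classification

# With uncertain classifications the emission term is flattened toward
# uniform, so the upper HMM's own transition prior carries relatively
# more weight -- exactly the behaviour described in the text.
print(forward_soft(pi, A, B, hard), forward_soft(pi, A, B, soft))
```

Note that when every soft observation is a one-hot vector, `forward_soft` reduces exactly to the ordinary forward algorithm over winner indices.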
==See also==