'''Iterative Viterbi decoding''' is an algorithm that finds the subsequence S of an observation O={o<sub>1</sub>,...,o<sub>n</sub>} with the highest average probability (i.e., probability scaled by the length of S) of being generated by a given [[hidden Markov model]] M with m states. The algorithm uses a modified [[Viterbi algorithm]] as an internal step.
The scaled probability measure was first proposed by [[John S. Bridle]]. An early algorithm for this problem, the [[sliding window]] approach, was proposed by [[Jay G. Wilpon]] et al. in 1989, with cost T=mn<sup>2</sup>/2.
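The search for the highest-average-probability subsequence can be sketched as follows. This is an illustrative reconstruction, not the exact iteration of the original algorithm: the function names are hypothetical, the inner pass is a Viterbi variant that may start (from the initial state distribution) and end at any position, and the outer loop here uses a bisection-style update on the candidate average score λ, exploiting the fact that f(λ) = max<sub>S</sub> [score(S) − λ·|S|] is decreasing in λ and is zero at the optimal average.

```python
import numpy as np

def modified_viterbi(log_init, log_trans, log_emit, obs, lam):
    """Viterbi variant where a path may start at any position (restarting
    from the initial distribution) and end at any position.
    Returns max over subsequences S of score(S) - lam * len(S).
    All quantities are log-probabilities (illustrative assumption)."""
    best = -np.inf
    delta = None  # delta[s]: best penalized score of a subsequence ending here in state s
    for o in obs:
        step = log_emit[:, o] - lam  # emit the symbol, pay lam per position
        if delta is None:
            delta = log_init + step
        else:
            # either extend an existing subsequence or start a fresh one here
            extend = (delta[:, None] + log_trans).max(axis=0)
            delta = np.maximum(extend, log_init) + step
        best = max(best, delta.max())  # the subsequence may end at any position
    return best

def iterative_viterbi_decoding(log_init, log_trans, log_emit, obs,
                               lo=-60.0, hi=0.0, iters=60):
    """Bisection on the per-symbol score lam (a sketch, not the exact
    update rule of the original algorithm): f(lam) is decreasing and
    crosses zero at the best achievable average log-probability."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if modified_viterbi(log_init, log_trans, log_emit, obs, mid) > 0:
            lo = mid  # some subsequence still beats average mid
        else:
            hi = mid
    return 0.5 * (lo + hi)  # approximate best average log-probability
```

Each call to `modified_viterbi` costs O(m<sup>2</sup>n), so the overall cost is that of a plain Viterbi pass times the number of outer iterations.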
== History ==
The algorithm originated from a bout of insomnia a couple of nights before an exam in July 1998 for a "Speech Recognition" class at EPFL (taught by
An extension for NLP was discovered by Antoine Rozenknop during a presentation given by Silaghi at LIA (EPFL) in 2000.