Viterbi algorithm

The algorithm is not general; it rests on a number of assumptions. First, both the observed events and the hidden events must form a sequence, which often corresponds to time. Second, the two sequences must be aligned: each observed event must correspond to exactly one hidden event. Third, computing the most likely hidden sequence up to a point ''t'' must depend only on the observed event at point ''t'' and on the most likely hidden sequence up to point ''t'' − 1. All of these assumptions are satisfied in a first-order hidden Markov model.
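The following minimal sketch, in Python, illustrates this recurrence for a first-order hidden Markov model: the most likely path to each state at time ''t'' is computed from the observation at ''t'' and the best paths at ''t'' − 1 alone. The dictionary-based probability tables and the two-state model at the end are hypothetical and chosen only for illustration.

<syntaxhighlight lang="python">
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely sequence of hidden states for the observations."""
    # best[t][s]: probability of the most likely path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]  # back[t][s]: predecessor of s on that path

    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            # Only the observation at t and the best paths at t - 1 are consulted.
            prev, p = max(
                ((r, best[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda pair: pair[1],
            )
            best[t][s] = p * emit_p[s][observations[t]]
            back[t][s] = prev

    # Trace the most likely path backwards from the best final state.
    state = max(states, key=lambda s: best[-1][s])
    path = [state]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path


# Hypothetical two-state model used purely for illustration.
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3},
           "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.5, "y": 0.4, "z": 0.1},
          "B": {"x": 0.1, "y": 0.3, "z": 0.6}}
print(viterbi(("x", "y", "z"), states, start_p, trans_p, emit_p))  # ['A', 'A', 'B']
</syntaxhighlight>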
 
The terms "Viterbi path" (or more generally "Viterbi ''foo''" if "path" is not appropriate) and "Viterbi algorithm" are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in stochastic [[parser|parsing]] a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the "Viterbi parse".
 
==A concrete example==