Viterbi algorithm

The '''Viterbi algorithm''', named after its developer [[Andrew Viterbi]], is a [[dynamic programming]] [[algorithm]] for finding the most [[likelihood|likely]] sequence of hidden states – known as the '''Viterbi path''' – that result in a sequence of observed events, especially in the context of [[hidden Markov model]]s. The '''forward algorithm''' is a closely related algorithm for computing the total probability of a sequence of observed events.
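The difference between the two algorithms can be seen in a short sketch. The following Python fragment is a minimal, illustrative implementation of the forward algorithm on a hypothetical two-state hidden Markov model; the state names, observations, and probability values are invented for the example and do not come from any particular application. (The Viterbi sketch further below uses the same toy model, replacing the summation with a maximization.)

 # Minimal sketch of the forward algorithm on a toy, invented HMM.
 # It sums the probabilities of all hidden-state paths that could have
 # produced the observations, giving the total probability of the
 # observed sequence.
 
 states = ('Rainy', 'Sunny')                      # hypothetical hidden states
 start_p = {'Rainy': 0.6, 'Sunny': 0.4}           # initial state probabilities
 trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
            'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
 emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
           'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
 
 def forward(observations):
     """Return the total probability of the observed sequence."""
     # alpha[s] = probability of the observations seen so far, ending in state s
     alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
     for obs in observations[1:]:
         alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
                     * emit_p[s][obs]
                  for s in states}
     return sum(alpha.values())
 
 print(forward(['walk', 'shop', 'clean']))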
 
The Viterbi algorithm was originally conceived as an [[error-correction]] scheme for noisy digital communication links, finding universal application in decoding the [[convolutional code]]s used in [[CDMA]] and [[GSM]] digital cellular, dial-up modems, satellite and deep-space communications, and [[802.11]] wireless LANs. It is now also commonly used in [[information theory]], [[speech recognition]], [[computational linguistics]], and [[bioinformatics]]. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
 
The algorithm is not general; it makes a number of assumptions. First, both the observed events and hidden events must be in a sequence. This sequence often corresponds to time. Second, these two sequences need to be aligned, and an observed event needs to correspond to exactly one hidden event. Third, computing the most likely hidden sequence up to a certain point t must only depend on the observed event at point t, and the most likely sequence at point t-1. These assumptions are all satisfied in a first-order hidden Markov model.
More technically, the Viterbi algorithm is a [[dynamic programming]] algorithm that finds the most likely sequence of hidden states given a sequence of observed events in a [[hidden Markov model]]. The resulting sequence of states is called the Viterbi path.
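As an illustration, the following Python sketch implements this dynamic program for a small, invented hidden Markov model (the same toy model as in the forward-algorithm sketch above, repeated so that this fragment is self-contained); the state names, observations, and probabilities are hypothetical and serve only to show the recurrence.

 # Sketch of the Viterbi algorithm on a toy, invented HMM.
 # V[s] holds the probability of the most likely state sequence that ends
 # in state s and accounts for the observations seen so far; path[s]
 # records that sequence.
 
 states = ('Rainy', 'Sunny')                      # hypothetical hidden states
 start_p = {'Rainy': 0.6, 'Sunny': 0.4}           # initial state probabilities
 trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
            'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
 emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
           'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
 
 def viterbi(observations):
     """Return (probability, state sequence) of the most likely hidden path."""
     # Initialize with the first observation.
     V = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
     path = {s: [s] for s in states}
     # Extend the best path into each state, one observation at a time.
     for obs in observations[1:]:
         new_V, new_path = {}, {}
         for s in states:
             prob, prev = max((V[p] * trans_p[p][s] * emit_p[s][obs], p)
                              for p in states)
             new_V[s] = prob
             new_path[s] = path[prev] + [s]
         V, path = new_V, new_path
     # Pick the most probable final state and return its path.
     prob, best = max((V[s], s) for s in states)
     return prob, path[best]
 
 print(viterbi(['walk', 'shop', 'clean']))

Replacing the maximization over previous states with a summation turns this recurrence into the forward algorithm described above.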
 
The terms "Viterbi path" (or more generally "Viterbi ''foo''" if "path" is not appropriate) and "Viterbi algorithm" are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in stochastic [[parser|parsing]] a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the "Viterbi parse".