Viterbi algorithm

{{Technical|date=September 2023}}
 
The '''Viterbi algorithm''' is a [[dynamic programming]] [[algorithm]] that finds the most [[likelihood function|likely]] sequence of hidden events that would explain a sequence of observed events. The result of the algorithm is often called the '''Viterbi path'''. It is most commonly used with [[hidden Markov model]]s (HMMs). For example, if a doctor observes a patient's symptoms over several days (the observed events), the Viterbi algorithm could determine the most probable sequence of underlying health conditions (the hidden events) that caused those symptoms.
 
The algorithm has found universal application in decoding the [[convolutional code]]s used in both [[Code-division multiple access|CDMA]] and [[GSM]] digital cellular, [[Dial-up Internet access|dial-up]] modems, satellite, deep-space communications, and [[802.11]] wireless LANs. It is now also commonly used in [[speech recognition]], [[speech synthesis]], [[Speaker diarisation|diarization]],<ref>Xavier Anguera et al., [http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf "Speaker Diarization: A Review of Recent Research"] {{Webarchive|url=https://web.archive.org/web/20160512200056/http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf |date=2016-05-12 }}, retrieved 19. August 2010, IEEE TASLP</ref> [[keyword spotting]], [[computational linguistics]], and [[bioinformatics]]. For instance, in [[speech-to-text]] (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of that signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
== History ==
The Viterbi algorithm is named after [[Andrew Viterbi]], who proposed it in 1967 as a decoding algorithm for [[Convolution code|convolutional codes]] over noisy digital communication links.<ref>[https://arxiv.org/abs/cs/0504020v2 29 Apr 2005, G. David Forney Jr: The Viterbi Algorithm: A Personal History]</ref> It has, however, a history of [[multiple invention]], with at least seven independent discoveries, including those by Viterbi, [[Needleman–Wunsch algorithm|Needleman and Wunsch]], and [[Wagner–Fischer algorithm|Wagner and Fischer]].<ref name="slp">{{cite book |author1=Daniel Jurafsky |author2=James H. Martin |title=Speech and Language Processing |publisher=Pearson Education International |page=246}}</ref><!-- Jurafsky and Martin specifically refer to the papers that presented the Needleman–Wunsch and Wagner–Fischer algorithms, hence the wikilinks to those--> It was introduced to [[natural language processing]] as a method of [[part-of-speech tagging]] as early as 1987.
\end{cases}
</math>
The formula for <math>Q_{t,s}</math> is identical for <math>t>0</math>, except that <math>\max</math> is replaced with [[Arg max|<math>\arg\max</math>]], and <math>Q_{0,s} = 0</math>.
The Viterbi path can be found by selecting the maximum of <math>P</math> at the final timestep, and following <math>Q</math> in reverse.
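The recurrence and traceback described above can be sketched in Python. This is a minimal illustration, not a reference implementation: the function and parameter names (`init`, `trans`, `emit`) are assumptions, and the dictionaries `P` and `Q` play the roles of <math>P_{t,s}</math> and <math>Q_{t,s}</math>.

```python
def viterbi(obs, states, init, trans, emit):
    """Most likely hidden-state sequence for obs under an HMM.

    init[s]: initial probability of state s
    trans[r][s]: probability of moving from state r to state s
    emit[s][o]: probability of observing o while in state s
    """
    # P[t][s]: probability of the best path ending in state s at time t.
    P = [{s: init[s] * emit[s][obs[0]] for s in states}]
    # Q[t][s]: predecessor of s on that best path (backpointer).
    Q = [{}]
    for t in range(1, len(obs)):
        P.append({})
        Q.append({})
        for s in states:
            # Replace max with argmax to record the best predecessor.
            best_prev = max(states, key=lambda r: P[t - 1][r] * trans[r][s])
            P[t][s] = P[t - 1][best_prev] * trans[best_prev][s] * emit[s][obs[t]]
            Q[t][s] = best_prev
    # Select the maximum at the final timestep and follow Q in reverse.
    last = max(states, key=lambda s: P[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(Q[t][path[-1]])
    return list(reversed(path)), P[-1][last]
```

The probabilities here are multiplied directly; practical implementations usually sum log-probabilities instead to avoid numerical underflow on long observation sequences.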
 
A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day.
 
Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is <math>0.6 \times 0.5 = 0.3</math>. Similarly, the probability that a patient will have a fever on the first day and report feeling normal is <math>0.4 \times 0.1 = 0.04</math>.
 
The probabilities for each of the following days can be calculated directly from the previous day. For example, the highest chance of being healthy on the second day and reporting feeling cold, given that the patient reported feeling normal on the first day, is the maximum of <math>0.3 \times 0.7 \times 0.4 = 0.084</math> and <math>0.04 \times 0.4 \times 0.4 = 0.0064</math>. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering.
 
The rest of the probabilities are summarised in the following table:
|}
 
From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending in "fever" for which the probability of producing the given observations is 0.01512. This sequence is precisely (healthy, healthy, fever), and it can be found by tracing back which states were used when calculating the maxima (which happens to be the best guess for each individual day here, though this is not always the case). In other words, given the observed activities, the patient was most likely to have been healthy on both the first and second day (despite feeling cold on the second day), and to have contracted a fever only on the third day.
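The day-by-day calculation can be reproduced with a short Python sketch. The model parameters below are taken from the example's tables; the values not visible in this excerpt (e.g. the fever-state emission probabilities) are filled in consistently with the arithmetic quoted above, so treat them as assumptions of this sketch.

```python
# Start, transition, and emission probabilities for the healthy/fever example
# (chosen to match the quoted arithmetic, e.g. 0.6 * 0.5 = 0.3).
init  = {"healthy": 0.6, "fever": 0.4}
trans = {"healthy": {"healthy": 0.7, "fever": 0.3},
         "fever":   {"healthy": 0.4, "fever": 0.6}}
emit  = {"healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
         "fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

# Day 1 (reports normal): initial probability times emission probability.
day1 = {s: init[s] * emit[s]["normal"] for s in init}
# Day 2 (reports cold): best predecessor for each state, times emission.
day2 = {s: max(day1[r] * trans[r][s] for r in init) * emit[s]["cold"]
        for s in init}
# Day 3 (reports dizzy): same recurrence again.
day3 = {s: max(day2[r] * trans[r][s] for r in init) * emit[s]["dizzy"]
        for s in init}

print(round(day1["healthy"], 5), round(day1["fever"], 5))   # 0.3 0.04
print(round(day2["healthy"], 5), round(day2["fever"], 5))   # 0.084 0.027
print(round(day3["healthy"], 5), round(day3["fever"], 5))   # 0.00588 0.01512
```

Keeping backpointers to the maximizing predecessor at each step and tracing them back from the largest day-3 value recovers the path (healthy, healthy, fever).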
 
The operation of Viterbi's algorithm can be visualized by means of a [[Trellis diagram|trellis diagram]]. The Viterbi path is essentially the shortest path through this trellis.
* [https://hackage.haskell.org/package/hmm-0.2.1.1/docs/src/Data-HMM.html#viterbi Haskell]
* [https://github.com/nyxtom/viterbi Go]
* [http://tuvalu.santafe.edu/~simon/styled-8/ SFIHMM]{{Dead link|date=August 2025 |bot=InternetArchiveBot |fix-attempted=yes }} includes code for Viterbi decoding.
 
[[Category:Eponymous algorithms of mathematics]]