Viterbi algorithm

{{Technical|date=September 2023}}
 
The '''Viterbi algorithm''' is a [[dynamic programming]] [[algorithm]] that finds the most likely sequence of hidden events that would explain a sequence of observed events. The result of the algorithm is often called the '''Viterbi path'''. It is most commonly used with [[hidden Markov model]]s (HMMs). For example, if a doctor observes a patient's symptoms over several days (the observed events), the Viterbi algorithm could determine the most probable sequence of underlying health conditions (the hidden events) that caused those symptoms.
 
The algorithm has found universal application in decoding the [[convolutional code]]s used in both [[Code-division multiple access|CDMA]] and [[GSM]] digital cellular, [[Dial-up Internet access|dial-up]] modems, satellite, deep-space communications, and [[802.11]] wireless LANs. It is now also commonly used in [[speech recognition]], [[speech synthesis]], [[Speaker diarisation|diarization]],<ref>Xavier Anguera et al., [http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf "Speaker Diarization: A Review of Recent Research"] {{Webarchive|url=https://web.archive.org/web/20160512200056/http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf |date=2016-05-12 }}, retrieved 19. August 2010, IEEE TASLP</ref> [[keyword spotting]], [[computational linguistics]], and [[bioinformatics]]. For instance, in [[speech-to-text]] (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of that signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

== History ==
The Viterbi algorithm is named after [[Andrew Viterbi]], who proposed it in 1967 as a decoding algorithm for [[Convolutional code|convolutional codes]] over noisy digital communication links.<ref>[https://arxiv.org/abs/cs/0504020v2 29 Apr 2005, G. David Forney Jr: The Viterbi Algorithm: A Personal History]</ref> It has, however, a history of [[multiple invention]], with at least seven independent discoveries, including those by Viterbi, [[Needleman–Wunsch algorithm|Needleman and Wunsch]], and [[Wagner–Fischer algorithm|Wagner and Fischer]].<ref name="slp">{{cite book |author1=Daniel Jurafsky |author2=James H. Martin |title=Speech and Language Processing |publisher=Pearson Education International |page=246}}</ref><!-- Jurafsky and Martin specifically refer to the papers that presented the Needleman–Wunsch and Wagner–Fischer algorithms, hence the wikilinks to those--> It was introduced to [[natural language processing]] as a method of [[part-of-speech tagging]] as early as 1987.
 
''Viterbi path'' and ''Viterbi algorithm'' have become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.<ref name="slp" />
For example, in [[statistical parsing]] a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse".<ref>{{Cite conference | doi = 10.3115/1220355.1220379| title = Efficient parsing of highly ambiguous context-free grammars with bit vectors| conference = Proc. 20th Int'l Conf. on Computational Linguistics (COLING)| pages = <!--162-->| year = 2004| last1 = Schmid | first1 = Helmut| url = http://www.aclweb.org/anthology/C/C04/C04-1024.pdf| doi-access = free}}</ref><ref>{{Cite conference| doi = 10.3115/1073445.1073461| title = A* parsing: fast exact Viterbi parse selection| conference = Proc. 2003 Conf. of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL)| pages = 40–47| year = 2003| last1 = Klein | first1 = Dan| last2 = Manning | first2 = Christopher D.| url = http://ilpubs.stanford.edu:8090/532/1/2002-16.pdf| doi-access = free}}</ref><ref>{{Cite journal | doi = 10.1093/nar/gkl200| title = AUGUSTUS: Ab initio prediction of alternative transcripts| journal = Nucleic Acids Research| volume = 34| issue = Web Server issue| pages = W435–W439| year = 2006| last1 = Stanke | first1 = M.| last2 = Keller | first2 = O.| last3 = Gunduz | first3 = I.| last4 = Hayes | first4 = A.| last5 = Waack | first5 = S.| last6 = Morgenstern | first6 = B. | pmid=16845043 | pmc=1538822}}</ref> Another application is in [[Optical motion tracking|target tracking]], where the track is computed that assigns a maximum likelihood to a sequence of observations.<ref>{{cite conference |author=Quach, T.; Farooq, M. |chapter=Maximum Likelihood Track Formation with the Viterbi Algorithm |title=Proceedings of 33rd IEEE Conference on Decision and Control |date=1994 |volume=1 |pages=271–276|doi=10.1109/CDC.1994.410918}}</ref>
 
== Algorithm ==
Given observations <math>o_0, o_1, \dots, o_{T-1}</math>, the algorithm fills two tables: <math>P_{t,s}</math>, the probability of the most likely sequence of states ending in state <math>s</math> at time <math>t</math>, and <math>Q_{t,s}</math>, the state at time <math>t-1</math> on that sequence. Writing <math>\pi_s</math> for the initial probability of state <math>s</math>, <math>a_{r,s}</math> for the probability of a transition from state <math>r</math> to state <math>s</math>, and <math>b_{s,o}</math> for the probability of observing <math>o</math> in state <math>s</math>,
:<math>
P_{t,s} =
\begin{cases}
\pi_s \cdot b_{s,o_0} & \text{if } t = 0, \\
\max_{r \in S} \left( P_{t-1,r} \cdot a_{r,s} \cdot b_{s,o_t} \right) & \text{if } t > 0.
\end{cases}
</math>
The formula for <math>Q_{t,s}</math> is identical for <math>t>0</math>, except that <math>\max</math> is replaced with [[Arg max|<math>\arg\max</math>]], and <math>Q_{0,s} = 0</math>.
The Viterbi path can be found by selecting the maximum of <math>P</math> at the final timestep, and following <math>Q</math> in reverse.
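Writing <math>x_t</math> for the state of the Viterbi path at time <math>t</math>, this backtracking step can be restated as
:<math>
x_{T-1} = \arg\max_{s \in S} P_{T-1,s}, \qquad x_t = Q_{t+1,\,x_{t+1}} \quad \text{for } t = T-2, \dots, 0.
</math>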
 
 '''function''' Viterbi(states, init, trans, emit, obs) '''is'''
     prob ← T × S matrix of zeroes
     prev ← empty T × S matrix
     '''for''' '''each''' state s '''in''' states '''do'''
         prob[0][s] ← init[s] * emit[s][obs[0]]
     '''end'''
     '''for''' t = 1 '''to''' T - 1 '''inclusive do''' ''// t = 0 has been dealt with already''
         '''for''' '''each''' state s '''in''' states '''do'''
             '''for''' '''each''' state r '''in''' states '''do'''
                 new_prob ← prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                 '''if''' new_prob > prob[t][s] '''then'''
                     prob[t][s] ← new_prob
                     prev[t][s] ← r
                 '''end'''
             '''end'''
         '''end'''
     '''end'''
     path ← empty array of length T
     path[T - 1] ← the state s with maximum prob[T - 1][s]
     '''for''' t = T - 2 '''to''' 0 '''inclusive do'''
         path[t] ← prev[t + 1][path[t + 1]]
     '''end'''
     '''return''' path
 '''end'''
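
The pseudocode above translates almost directly into Python. The following is a minimal sketch rather than a standard implementation: the function name <code>viterbi</code> and the dictionary-based representation of the transition and emission probabilities are illustrative choices.

<syntaxhighlight lang="python">
def viterbi(states, init, trans, emit, obs):
    """Return the most likely sequence of hidden states for obs.

    states: iterable of hidden states
    init:   mapping state -> initial probability
    trans:  mapping state -> (mapping state -> transition probability)
    emit:   mapping state -> (mapping observation -> emission probability)
    obs:    sequence of observations
    """
    T = len(obs)
    # prob[t][s] is the probability of the most likely path ending in s
    # at time t; prev[t][s] is the predecessor of s on that path.
    prob = [{s: 0.0 for s in states} for _ in range(T)]
    prev = [{} for _ in range(T)]

    for s in states:
        prob[0][s] = init[s] * emit[s][obs[0]]

    for t in range(1, T):
        for s in states:
            for r in states:
                new_prob = prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s]:
                    prob[t][s] = new_prob
                    prev[t][s] = r

    # Backtrack from the most probable final state.
    path = [None] * T
    path[T - 1] = max(states, key=lambda s: prob[T - 1][s])
    for t in range(T - 2, -1, -1):
        path[t] = prev[t + 1][path[t + 1]]
    return path
</syntaxhighlight>

In practice, the probabilities are usually replaced by log-probabilities and the products by sums, to avoid numerical underflow on long observation sequences.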
== Example ==
A doctor wishes to determine whether patients are healthy or have a fever, based only on their reports of how they feel. The patients' condition is modelled as a hidden Markov model with two states, "healthy" and "fever", which the doctor cannot observe directly; each day, the patient reports one of three possible feelings: "normal", "cold", or "dizzy". The prior probabilities of a new patient being healthy or feverish are 0.6 and 0.4 respectively. A healthy patient remains healthy the next day with probability 0.7 and develops a fever with probability 0.3, while a feverish patient recovers with probability 0.4 and remains feverish with probability 0.6. A healthy patient reports feeling normal, cold, or dizzy with probabilities 0.5, 0.4, and 0.1; for a feverish patient, these probabilities are 0.1, 0.3, and 0.6.
A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day.
 
Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is <math>0.6 \times 0.5 = 0.3</math>. Similarly, the probability that a patient will have a fever on the first day and report feeling normal is <math>0.4 \times 0.1 = 0.04</math>.
 
The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting feeling cold, having reported feeling normal on the first day, is the maximum of <math>0.3 \times 0.7 \times 0.4 = 0.084</math> and <math>0.04 \times 0.4 \times 0.4 = 0.0064</math>. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering.
 
The rest of the probabilities are summarised in the following table:
{| class="wikitable"
! !! Day 1 !! Day 2 !! Day 3
|-
! Healthy
| '''0.3''' || '''0.084''' || 0.00588
|-
! Fever
| 0.04 || 0.027 || '''0.01512'''
|}
 
From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending in "fever", of which the probability of producing the given observations is 0.01512. This sequence is precisely (healthy, healthy, fever), and it can be found by tracing back which states were used when calculating the maxima (which happens to be the best guess from each day here, but will not always be the case). In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and to have contracted a fever only on the third day.
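
The calculation can be checked against the Python sketch given in the Algorithm section above; the state names, observation names, and probabilities below are those of this example:

<syntaxhighlight lang="python">
states = ("Healthy", "Fever")
init = {"Healthy": 0.6, "Fever": 0.4}
trans = {
    "Healthy": {"Healthy": 0.7, "Fever": 0.3},
    "Fever":   {"Healthy": 0.4, "Fever": 0.6},
}
emit = {
    "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
    "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}

# Observations reported on the three consecutive visits.
obs = ["normal", "cold", "dizzy"]

print(viterbi(states, init, trans, emit, obs))
# ['Healthy', 'Healthy', 'Fever']
</syntaxhighlight>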
 
The operation of Viterbi's algorithm can be visualized by means of a [[trellis diagram]]. The Viterbi path is essentially the shortest path through this trellis.
== References ==
{{Reflist}}

=== General references ===
* {{cite book |vauthors=Feldman J, Abou-Faycal I, Frigo M |chapter=A fast maximum-likelihood decoder for convolutional codes |title=Proceedings IEEE 56th Vehicular Technology Conference |volume=1 |pages=371–375 |year=2002 |doi=10.1109/VETECF.2002.1040367|isbn=978-0-7803-7467-6 |citeseerx=10.1.1.114.1314 |s2cid=9783963 }}
* {{cite journal |doi=10.1109/PROC.1973.9030 |author=Forney GD |title=The Viterbi algorithm |journal=Proceedings of the IEEE |volume=61 |issue=3 |pages=268–278 |date=March 1973 }} Subscription required.
* {{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | ___location=New York | isbn=978-0-521-88068-8 | chapter=Section 16.2. Viterbi Decoding | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=850 | access-date=2011-08-17 | archive-date=2011-08-11 | archive-url=https://web.archive.org/web/20110811154417/http://apps.nrbook.com/empanel/index.html#pg=850 | url-status=dead }}
* {{cite journal |author=Rabiner LR |title=A tutorial on hidden Markov models and selected applications in speech recognition |journal=Proceedings of the IEEE |volume=77 |issue=2 |pages=257–286 |date=February 1989 |doi=10.1109/5.18626|citeseerx=10.1.1.381.3454 |s2cid=13618539 }} (Describes the forward algorithm and Viterbi algorithm for HMMs).
* Shinghal, R. and [[Godfried Toussaint|Godfried T. Toussaint]], "Experiments in text recognition with the modified Viterbi algorithm," ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', Vol. PAMI-1, April 1979, pp.&nbsp;184–193.
== External links ==
* [https://github.com/xukmin/viterbi C++]
* [http://pcarvalho.com/forward_viterbi/ C#]
* [http://www.cs.stonybrook.edu/~pfodor/viterbi/Viterbi.java Java] {{Webarchive|url=https://web.archive.org/web/20140504055101/http://www.cs.stonybrook.edu/~pfodor/viterbi/Viterbi.java |date=2014-05-04 }}
* [https://adrianulbona.github.io/hmm/ Java 8]
* [https://juliahub.com/ui/Packages/HMMBase/8HxY5/ Julia (HMMBase.jl)]
* [https://metacpan.org/module/Algorithm::Viterbi Perl]
* [http://www.cs.stonybrook.edu/~pfodor/viterbi/viterbi.P Prolog] {{Webarchive|url=https://web.archive.org/web/20120502010115/http://www.cs.stonybrook.edu/~pfodor/viterbi/viterbi.P |date=2012-05-02 }}
* [https://hackage.haskell.org/package/hmm-0.2.1.1/docs/src/Data-HMM.html#viterbi Haskell]
* [https://github.com/nyxtom/viterbi Go]
* [http://tuvalu.santafe.edu/~simon/styled-8/ SFIHMM]{{Dead link|date=August 2025 |bot=InternetArchiveBot |fix-attempted=yes }} includes code for Viterbi decoding.
 
[[Category:Eponymous algorithms of mathematics]]