Viterbi algorithm

{{Technical|date=September 2023}}
 
The '''Viterbi algorithm''' is a [[dynamic programming]] [[algorithm]] for obtaining the [[Maximum a posteriori estimation|maximum a posteriori probability estimate]] of the most [[likelihood function|likely]] sequence of hidden states, called the '''Viterbi path''', that results in a sequence of observed events. It is most commonly used with [[Markov information source]]s and [[hidden Markov model]]s (HMMs). For example, if a doctor observes a patient's symptoms over several days (the observed events), the Viterbi algorithm can determine the most probable sequence of underlying health conditions (the hidden states) that caused those symptoms.
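
A minimal sketch of Viterbi decoding in Python for the doctor example above is shown below. The state names, observation names, and probabilities are purely illustrative assumptions, not values taken from any real model.

<syntaxhighlight lang="python">
# A minimal sketch of Viterbi decoding for the doctor example above.
# The states, observations, and probabilities below are illustrative
# assumptions, not values from any real clinical model.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable sequence of hidden states for `obs`."""
    # prob[t][s]: probability of the best state sequence ending in s at time t
    # prev[t][s]: predecessor of s on that best sequence (for backtracking)
    prob = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    prev = [{}]
    for t in range(1, len(obs)):
        prob.append({})
        prev.append({})
        for s in states:
            # Pick the predecessor r that maximises the probability of reaching s.
            best_r = max(states, key=lambda r: prob[t - 1][r] * trans_p[r][s])
            prob[t][s] = prob[t - 1][best_r] * trans_p[best_r][s] * emit_p[s][obs[t]]
            prev[t][s] = best_r
    # Backtrack from the most probable final state to recover the Viterbi path.
    last = max(states, key=lambda s: prob[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, prev[t][path[0]])
    return path

# Hypothetical model: two hidden health conditions, observed daily symptoms.
states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit_p  = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
           "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

print(viterbi(["normal", "cold", "dizzy"], states, start_p, trans_p, emit_p))
# ['Healthy', 'Healthy', 'Fever']
</syntaxhighlight>

In practice, the product of many small probabilities underflows on long observation sequences, so implementations usually work with log probabilities and sums instead.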
 
The algorithm has found universal application in decoding the [[convolutional code]]s used in both [[Code-division multiple access|CDMA]] and [[GSM]] digital cellular, [[Dial-up Internet access|dial-up]] modems, satellite, deep-space communications, and [[802.11]] wireless LANs. It is now also commonly used in [[speech recognition]], [[speech synthesis]], [[Speaker diarisation|diarization]],<ref>Xavier Anguera et al., [http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf "Speaker Diarization: A Review of Recent Research"] {{Webarchive|url=https://web.archive.org/web/20160512200056/http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf |date=2016-05-12 }}, retrieved 19 August 2010, IEEE TASLP</ref> [[keyword spotting]], [[computational linguistics]], and [[bioinformatics]]. For instance, in [[speech-to-text]] (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of that signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

== History ==
The Viterbi algorithm is named after [[Andrew Viterbi]], who proposed it in 1967 as a decoding algorithm for [[Convolution code|convolutional codes]] over noisy digital communication links.<ref>[https://arxiv.org/abs/cs/0504020v2 29 Apr 2005, G. David Forney Jr: The Viterbi Algorithm: A Personal History]</ref> It has, however, a history of [[multiple invention]], with at least seven independent discoveries, including those by Viterbi, [[Needleman–Wunsch algorithm|Needleman and Wunsch]], and [[Wagner–Fischer algorithm|Wagner and Fischer]].<ref name="slp">{{cite book |author1=Daniel Jurafsky |author2=James H. Martin |title=Speech and Language Processing |publisher=Pearson Education International |page=246}}</ref><!-- Jurafsky and Martin specifically refer to the papers that presented the Needleman–Wunsch and Wagner–Fischer algorithms, hence the wikilinks to those--> It was introduced to [[natural language processing]] as a method of [[part-of-speech tagging]] as early as 1987.
 '''for''' '''each''' state s '''in''' states '''do'''
     '''for''' '''each''' state r '''in''' states '''do'''
         new_prob ← prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
         '''if''' new_prob > prob[t][s] '''then'''
             prob[t][s] ← new_prob
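
The excerpt above shows only the inner update of the recurrence; the surrounding loop over time steps and the back-pointer table needed to recover the best path are not shown. A sketch of the same recurrence in Python, with those pieces filled in under the assumption of integer-coded states and observations, might look as follows.

<syntaxhighlight lang="python">
def viterbi_path(obs, n_states, init, trans, emit):
    """Return the most probable state sequence for the observation indices `obs`."""
    T = len(obs)
    prob = [[0.0] * n_states for _ in range(T)]  # prob[t][s]: best path probability ending in s at t
    prev = [[0] * n_states for _ in range(T)]    # prev[t][s]: predecessor of s on that best path
    # Initialisation from the start distribution and the first observation.
    for s in range(n_states):
        prob[0][s] = init[s] * emit[s][obs[0]]
    # Dynamic-programming update, as in the pseudocode fragment above.
    for t in range(1, T):
        for s in range(n_states):
            for r in range(n_states):
                new_prob = prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s]:
                    prob[t][s] = new_prob
                    prev[t][s] = r
    # Trace back from the most probable final state.
    path = [max(range(n_states), key=lambda s: prob[T - 1][s])]
    for t in range(T - 1, 0, -1):
        path.insert(0, prev[t][path[0]])
    return path
</syntaxhighlight>

The triple loop gives a running time of O(T·K²) for T observations and K states.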
== Implementations ==
* [https://hackage.haskell.org/package/hmm-0.2.1.1/docs/src/Data-HMM.html#viterbi Haskell]
* [https://github.com/nyxtom/viterbi Go]
* [http://tuvalu.santafe.edu/~simon/styled-8/ SFIHMM]{{Dead link|date=August 2025 |bot=InternetArchiveBot |fix-attempted=yes }} includes code for Viterbi decoding.
 
[[Category:Eponymous algorithms of mathematics]]