{{Technical|date=September 2023}}
The '''Viterbi algorithm''' is a [[dynamic programming]] [[algorithm]] for obtaining the [[maximum a posteriori estimation|maximum a posteriori probability]] estimate of the most likely sequence of hidden states, called the '''Viterbi path''', that results in a sequence of observed events. It is used especially in the context of [[Markov information source]]s and [[hidden Markov model]]s (HMMs).
The algorithm has found universal application in decoding the [[convolutional code]]s used in both [[Code-division multiple access|CDMA]] and [[GSM]] digital cellular, [[Dial-up Internet access|dial-up]] modems, satellite, deep-space communications, and [[802.11]] wireless LANs. It is now also commonly used in [[speech recognition]], [[speech synthesis]], [[diarization]],<ref>Xavier Anguera et al., [http://www1.icsi.berkeley.edu/~vinyals/Files/taslp2011a.pdf "Speaker Diarization: A Review of Recent Research"], retrieved 19. August 2010, IEEE TASLP</ref> [[keyword spotting]], [[computational linguistics]], and [[bioinformatics]]. For example, in [[speech-to-text]] (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
== History ==
The Viterbi algorithm is named after [[Andrew Viterbi]], who proposed it in 1967 as a decoding algorithm for [[Convolution code|convolutional codes]] over noisy digital communication links.<ref>[https://arxiv.org/abs/cs/0504020v2 29 Apr 2005, G. David Forney Jr: The Viterbi Algorithm: A Personal History]</ref> It has, however, a history of [[multiple invention]], with at least seven independent discoveries, including those by Viterbi, [[Needleman–Wunsch algorithm|Needleman and Wunsch]], and [[Wagner–Fischer algorithm|Wagner and Fischer]].<ref name="slp">{{cite book |author1=Daniel Jurafsky |author2=James H. Martin |title=Speech and Language Processing |publisher=Pearson Education International |page=246}}</ref><!-- Jurafsky and Martin specifically refer to the papers that presented the Needleman–Wunsch and Wagner–Fischer algorithms, hence the wikilinks to those--> It was introduced to [[natural language processing]] as a method of [[part-of-speech tagging]] as early as 1987.
''Viterbi path'' and ''Viterbi algorithm'' have become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.<ref name="slp" />
== Algorithm ==
Given a hidden Markov model with a set of hidden states <math>S</math> and a sequence of <math>T</math> observations <math>o_0, o_1, \dots, o_{T-1}</math>, the Viterbi algorithm finds the most likely sequence of states that could have produced those observations. At each time step <math>t</math>, the algorithm solves the subproblem where only the observations up to <math>o_t</math> are considered.
Two matrices of size <math>T \times \left|{S}\right|</math> are constructed:
* <math>P_{t,s}</math> contains the maximum probability of ending up at state <math>s</math> at observation <math>t</math>, out of all possible sequences of states leading up to it.
* <math>Q_{t,s}</math> tracks the previous state that was used before <math>s</math> in this maximum probability sequence of states.
Let <math>\pi_s</math> and <math>a_{r,s}</math> be the initial and transition probabilities respectively, and let <math>b_{s,o}</math> be the probability of observing <math>o</math> at state <math>s</math>. Then the values of <math>P</math> are given by the recurrence relation<ref>Xing E, slide 11.</ref>
<math>
P_{t,s} =
\begin{cases}
\pi_s \cdot b_{s,o_0} & \text{if } t = 0, \\
\max_{r \in S} \left(P_{t-1,r} \cdot a_{r,s} \cdot b_{s,o_t} \right) & \text{if } t > 0.
\end{cases}
</math>
The formula for <math>Q_{t,s}</math> is identical for <math>t>0</math>, except that <math>\max</math> is replaced with [[Arg max|<math>\arg\max</math>]], and <math>Q_{0,s} = 0</math>.
The Viterbi path can be found by selecting the maximum of <math>P</math> at the final timestep, and following <math>Q</math> in reverse.
== Pseudocode ==
'''function''' Viterbi(states, init, trans, emit, obs) '''is'''
'''input''' states: the set of S hidden states
'''input''' init: the initial probability of each state
'''input''' trans: the S × S state transition matrix
'''input''' emit: the S × O emission matrix
'''input''' obs: the sequence of T observations
prob ← T × S matrix of zeroes
prev ← empty T × S matrix
'''for''' '''each''' state s '''in''' states '''do'''
prob[0][s] ← init[s] * emit[s][obs[0]]
'''for''' t = 1 '''to''' T - 1 '''inclusive do''' ''// t = 0 has been dealt with already''
'''for''' '''each''' state s '''in''' states '''do'''
'''for''' '''each''' state r '''in''' states '''do'''
new_prob ← prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
'''if''' new_prob > prob[t][s] '''then'''
prob[t][s] ← new_prob
prev[t][s] ← r
path ← empty array of length T
path[T - 1] ← the state s with maximum prob[T - 1][s]
'''for''' t = T - 2 '''to''' 0 '''inclusive do'''
path[t] ← prev[t + 1][path[t + 1]]
'''return''' path
'''end'''
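A direct Python transcription of the pseudocode above may make the bookkeeping easier to follow. The sketch below is illustrative rather than authoritative: it assumes the states are given as a list and that <code>init</code>, <code>trans</code> and <code>emit</code> are nested dictionaries keyed by state and observation, as in the example of the next section.

<pre>
def viterbi(states, init, trans, emit, obs):
    # prob[t][s]: highest probability of any state sequence that ends in
    # state s and accounts for the first t + 1 observations.
    # prev[t][s]: predecessor of s on that most probable sequence.
    prob = [{s: 0.0 for s in states} for _ in obs]
    prev = [{s: None for s in states} for _ in obs]

    for s in states:
        prob[0][s] = init[s] * emit[s][obs[0]]

    for t in range(1, len(obs)):
        for s in states:
            for r in states:
                new_prob = prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s]:
                    prob[t][s] = new_prob
                    prev[t][s] = r

    # Pick the most probable final state, then follow the back-pointers.
    path = [None] * len(obs)
    path[-1] = max(states, key=lambda s: prob[-1][s])
    for t in range(len(obs) - 2, -1, -1):
        path[t] = prev[t + 1][path[t + 1]]
    return path
</pre>

As in the pseudocode, the running time is proportional to <math>T \times \left|{S}\right|^2</math> and the memory use to <math>T \times \left|{S}\right|</math>.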
== Example ==
A doctor wishes to determine whether patients are healthy or have a fever. The only information available is how each patient says they feel, which may be reported as normal, cold, or dizzy.
The ''observations'' (normal, cold, dizzy) along with the ''hidden'' states (healthy, fever) form a hidden Markov model (HMM). From past experience, the probabilities of this model have been estimated as:
<pre>
init = {"Healthy": 0.6, "Fever": 0.4}
trans = {
    "Healthy": {"Healthy": 0.7, "Fever": 0.3},
    "Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit = {
    "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
"Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
</pre>
In this code, <code>init</code> represents the doctor's belief about how likely the patient is to be healthy initially. Note that the particular probability distribution used here is not the equilibrium one, which would be <code>{'Healthy': 0.57, 'Fever': 0.43}</code> according to the transition probabilities. The transition probabilities <code>trans</code> represent the change of health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emission probabilities <code>emit</code> represent how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy.
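The equilibrium figures quoted above follow directly from the transition probabilities: in a two-state chain the stationary probability of being healthy is the probability of entering "Healthy" divided by the total probability of switching state in either direction. A minimal check, assuming the <code>trans</code> table defined above:

<pre>
p_hf = trans["Healthy"]["Fever"]   # 0.3
p_fh = trans["Fever"]["Healthy"]   # 0.4
print(p_fh / (p_hf + p_fh))        # ~0.571, the "Healthy" entry
print(p_hf / (p_hf + p_fh))        # ~0.429, the "Fever" entry
</pre>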
A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day.
Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that the patient is healthy on the first day and reports feeling normal is <math>0.6 \times 0.5 = 0.3</math>; similarly, the probability that the patient has a fever and reports feeling normal is <math>0.4 \times 0.1 = 0.04</math>.
The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting to be cold, following reporting being normal on the first day, is the maximum of <math>0.3 \times 0.7 \times 0.4 = 0.084</math> and <math>0.04 \times 0.4 \times 0.4 = 0.0064</math>. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering.
The rest of the probabilities are summarised in the following table:
{| class="wikitable"
! !! normal !! cold !! dizzy
|-
! Healthy
| '''0.3''' || '''0.084''' || 0.00588
|-
! Fever
| 0.04 || 0.027 || '''0.01512'''
|}
From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending on "fever", of which the probability of producing the given observations is 0.01512.
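With the Python sketch from the Pseudocode section, the same conclusion can be reproduced directly; the call below is a usage example that assumes the <code>viterbi</code> function together with the <code>init</code>, <code>trans</code> and <code>emit</code> tables defined earlier.

<pre>
obs = ["normal", "cold", "dizzy"]
states = ["Healthy", "Fever"]
print(viterbi(states, init, trans, emit, obs))
# ['Healthy', 'Healthy', 'Fever']
</pre>

Tracing the back-pointers from the final "fever" state recovers the diagnosis healthy, healthy, fever, matching the bold entries in the table.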
The operation of Viterbi's algorithm can be visualized by means of a [[Trellis diagram#Trellis diagram|trellis diagram]]. The Viterbi path is essentially the shortest path through this trellis.
== References ==
{{Reflist}}

== General references ==
* {{cite book |vauthors=Feldman J, Abou-Faycal I, Frigo M |chapter=A fast maximum-likelihood decoder for convolutional codes |title=Proceedings IEEE 56th Vehicular Technology Conference |volume=1 |pages=371–375 |year=2002 |doi=10.1109/VETECF.2002.1040367|isbn=978-0-7803-7467-6 |citeseerx=10.1.1.114.1314 |s2cid=9783963 }}
* {{cite journal |doi=10.1109/PROC.1973.9030 |author=Forney GD |title=The Viterbi algorithm |journal=Proceedings of the IEEE |volume=61 |issue=3 |pages=268–278 |date=March 1973 }} Subscription required.
* {{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | ___location=New York | isbn=978-0-521-88068-8 | chapter=Section 16.2. Viterbi Decoding | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=850 | access-date=2011-08-17 | archive-date=2011-08-11 | archive-url=https://web.archive.org/web/20110811154417/http://apps.nrbook.com/empanel/index.html#pg=850 | url-status=dead }}
* {{cite journal |author=Rabiner LR |title=A tutorial on hidden Markov models and selected applications in speech recognition |journal=Proceedings of the IEEE |volume=77 |issue=2 |pages=257–286 |date=February 1989 |doi=10.1109/5.18626|citeseerx=10.1.1.381.3454 |s2cid=13618539 }} (Describes the forward algorithm and Viterbi algorithm for HMMs).
* Shinghal, R. and [[Godfried Toussaint|Godfried T. Toussaint]], "Experiments in text recognition with the modified Viterbi algorithm," ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', Vol. PAMI-1, April 1979, pp. 184–193.
== External links ==
* [https://github.com/xukmin/viterbi C++]
* [http://pcarvalho.com/forward_viterbi/ C#]
* [http://www.cs.stonybrook.edu/~pfodor/viterbi/Viterbi.java Java] {{Webarchive|url=https://web.archive.org/web/20140504055101/http://www.cs.stonybrook.edu/~pfodor/viterbi/Viterbi.java |date=2014-05-04 }}
* [https://adrianulbona.github.io/hmm/ Java 8]
* [https://juliahub.com/ui/Packages/HMMBase/8HxY5/ Julia (HMMBase.jl)]
* [https://metacpan.org/module/Algorithm::Viterbi Perl]
* [http://www.cs.stonybrook.edu/~pfodor/viterbi/viterbi.P Prolog] {{Webarchive|url=https://web.archive.org/web/20120502010115/http://www.cs.stonybrook.edu/~pfodor/viterbi/viterbi.P |date=2012-05-02 }}
* [https://hackage.haskell.org/package/hmm-0.2.1.1/docs/src/Data-HMM.html#viterbi Haskell]
* [https://github.com/nyxtom/viterbi Go]
* [http://tuvalu.santafe.edu/~simon/styled-8/ SFIHMM]{{Dead link|date=August 2025 |bot=InternetArchiveBot |fix-attempted=yes }} includes code for Viterbi decoding.
[[Category:Eponymous algorithms of mathematics]]