Viterbi algorithm
==A concrete example==
 
{{HMM example}}
Assume you have a friend who lives far away and whom you call daily to talk about what each of you did that day. Your friend is interested in only three activities: walking in the park, shopping, and cleaning his apartment. The choice of activity is determined exclusively by the weather on a given day. You have no definite information about the weather where your friend lives, but you know general trends. Based on what he tells you he did each day, you try to guess what the weather must have been like.
 
You believe that the weather operates as a discrete [[Markov chain]]. There are two states, "Rainy" and "Sunny", but you cannot observe them directly, that is, they are ''hidden'' from you. On each day, there is a certain chance that your friend will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since your friend tells you about his activities, those are the ''observations''. The entire system is that of a hidden Markov model (HMM).
 
You know the general weather trends in the area and you know what your friend likes to do on average. In other words, the parameters of the HMM are known. In fact, you can write them down in the [[Python programming language]]:
 
 states = ('Rainy', 'Sunny')
 observations = ('walk', 'shop', 'clean')
 start_probability = {'Rainy': 0.6, 'Sunny': 0.4}
 transition_probability = {
     'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
     'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
 }
 emission_probability = {
     'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
     'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
 }
 
In this fragment, <code>start_probability</code> represents your uncertainty about which state the HMM is in when your friend first calls you (all you know is that it tends to be rainy on average). The <code>transition_probability</code> describes how the weather changes in the underlying Markov chain: in this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The <code>emission_probability</code> tells you how likely your friend is to perform a certain activity on each day: if it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he will go outside for a walk.
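As a quick illustration of how these tables combine (a simple arithmetic check on the parameters above, not part of the model itself), the probability that the first day is rainy ''and'' your friend goes for a walk is the product of a start probability and an emission probability:

 # Joint probability of hidden state 'Rainy' and observation 'walk' on day 1:
 p = start_probability['Rainy'] * emission_probability['Rainy']['walk']
 print(p)  # 0.6 * 0.1 = 0.06 (up to floating-point rounding)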
 
You talk to your friend three days in a row and discover that on the first day he went for a walk, on the second day he went shopping, and on the third day he cleaned his apartment. You have two questions: What is the overall probability of this sequence of observations? And what is the most likely sequence of rainy/sunny days that would explain these observations? The first question is answered by the [[forward algorithm]]; the second by the Viterbi algorithm. These two algorithms are structurally so similar (in fact, they are both instances of the same abstract algorithm) that they can be implemented in a single function:
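A minimal sketch of such a function is given below; the name <code>forward_viterbi</code> and the shape of its return value are illustrative choices, not fixed by the algorithm. In a single pass it ''sums'' over predecessor states to get the forward probability and ''maximizes'' over them to get the Viterbi path:

 def forward_viterbi(obs, states, start_p, trans_p, emit_p):
     # T maps each state to a triple:
     #   (forward probability of reaching this state,
     #    most probable path ending in this state,
     #    probability of that path)
     T = {}
     for s in states:
         p = start_p[s] * emit_p[s][obs[0]]
         T[s] = (p, [s], p)
     for o in obs[1:]:
         U = {}
         for nxt in states:
             total = 0.0
             best_path, best_prob = None, 0.0
             for src in states:
                 fwd, path, v_prob = T[src]
                 p = trans_p[src][nxt] * emit_p[nxt][o]
                 total += fwd * p               # forward: sum over predecessors
                 if v_prob * p > best_prob:     # Viterbi: max over predecessors
                     best_prob = v_prob * p
                     best_path = path + [nxt]
             U[nxt] = (total, best_path, best_prob)
         T = U
     # Combine over the final states: the sum gives the total probability of
     # the observations, the max gives the single most probable state sequence.
     total = sum(fwd for fwd, _, _ in T.values())
     best_prob, best_path = max((v_prob, path) for _, path, v_prob in T.values())
     return (total, best_path, best_prob)

Running it on the observation sequence above yields, up to floating-point rounding, a total probability of 0.033612 and identifies <code>['Sunny', 'Rainy', 'Rainy']</code> (with probability 0.01344) as the most likely weather sequence:

 print(forward_viterbi(observations, states, start_probability,
                       transition_probability, emission_probability))
 # -> (0.033612, ['Sunny', 'Rainy', 'Rainy'], 0.01344)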