We transform the probability distributions related to a given [[hidden Markov model]] into matrix notation as follows.
The transition probabilities <math>\mathbf{P}(X_t\mid X_{t-1})</math> of a given random variable <math>X_t</math> representing all possible states in the hidden Markov model will be represented by the matrix <math>\mathbf{T}</math>, where the row index <math>i</math> represents the start state and the column index <math>j</math> represents the target state. In the example below, the probability of remaining in the same state after each step is 70% and the probability of transitioning to the other state is 30%:
:<math>\mathbf{T} = \begin{pmatrix}
 0.7 & 0.3 \\
 0.3 & 0.7
\end{pmatrix}</math>
The emission matrix
:<math>\mathbf{B} = \begin{pmatrix}
 0.9 & 0.1 \\
 0.2 & 0.8
\end{pmatrix}</math>
provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1, while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2, and event 2 has an 80% chance of occurring. Given an arbitrary row vector describing the state of the system (<math>\mathbf{\pi}</math>), the probability of observing event <math>j</math> is then
:<math>\mathbf{P}(O = j)=\sum_{i} \pi_i b_{i,j}</math>
This can be represented in matrix form by multiplying the state row-vector (<math>\mathbf{\pi}</math>) by an observation matrix (<math>\mathbf{O_j} = \mathrm{diag}(b_{*,o_j})</math>) containing only diagonal entries. Each entry is the probability of the observed event given each state. Continuing the above example, an observation of event 1 would be:
:<math>\mathbf{O_1} = \begin{pmatrix}
 0.9 & 0.0 \\
 0.0 & 0.2
\end{pmatrix}</math>
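As a concrete sketch, the diagonal observation matrices can be built in NumPy from the emission probabilities quoted above (90%/10% in state 1, 20%/80% in state 2); the function name `observation_matrix` and the state vector used in the usage line are illustrative, not part of the original example.

```python
import numpy as np

# Emission probabilities b[i, j] = P(event j | state i),
# taken from the example: 90%/10% in state 1, 20%/80% in state 2.
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def observation_matrix(j):
    """Diagonal observation matrix O_j = diag(b[:, j]) for event j (0-indexed)."""
    return np.diag(B[:, j])

O1 = observation_matrix(0)       # observation of event 1

pi = np.array([0.6, 0.4])        # hypothetical state row vector
p_event1 = (pi @ O1).sum()       # equals sum_i pi_i * b_{i,1}
```

Summing the entries of <math>\mathbf{\pi}\mathbf{O_1}</math> reproduces the scalar formula <math>\mathbf{P}(O = j)=\sum_{i} \pi_i b_{i,j}</math> above.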
This allows us to calculate the unnormalized probabilities associated with transitioning to a new state
:<math>
\mathbf{f_{0:1}} = \mathbf{\pi} \mathbf{T} \mathbf{O_1}
</math>
The probability vector that results contains entries indicating the unnormalized probability of transitioning to each state and observing the given event.
:<math>
\mathbf{f_{0:1}}(i) = \mathbf{P}(o_1, X_1=x_i | \mathbf{\pi})
</math>
This process can be carried forward with additional observations using:
:<math>
\mathbf{f_{0:t}} = \mathbf{f_{0:t-1}} \mathbf{T} \mathbf{O_{o(t)}}
</math>
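The recursion above can be sketched in NumPy. The emission probabilities follow the example in the text; the transition matrix, initial state vector, and observation sequence used here are illustrative assumptions.

```python
import numpy as np

T = np.array([[0.7, 0.3],    # assumed transition matrix:
              [0.3, 0.7]])   # 70% chance of staying in the same state
B = np.array([[0.9, 0.1],    # emission probabilities from the example
              [0.2, 0.8]])

def forward_unnormalized(pi, observations):
    """Compute f_{0:t} by repeated application of f_{0:t} = f_{0:t-1} T O_{o(t)}."""
    f = pi.copy()
    for o in observations:
        # One step of the recursion: multiply by T, then by O_{o(t)} = diag(B[:, o])
        f = f @ T @ np.diag(B[:, o])
    return f

pi = np.array([0.5, 0.5])                 # assumed uniform initial state vector
f = forward_unnormalized(pi, [0, 0, 1])   # events 1, 1, 2 (0-indexed)
```

The sum of the entries of the resulting vector is the total probability of the observation sequence under the model.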
Entry <math>i</math> of the vector <math>\mathbf{f_{0:t}}</math> is the joint probability of the observations so far and the state at time <math>t</math>:
:<math>
\mathbf{f_{0:t}}(i) = \mathbf{P}(o_1, o_2, \dots, o_t, X_t=x_i | \mathbf{\pi})
</math>
In practice, the forward vector is normalized at each step so that its entries sum to one, using a scaling factor <math>c_t</math> equal to the sum of the entries of <math>\mathbf{f_{0:t}}</math>:
:<math>
\mathbf{\hat{f}_{0:t}} = c_t^{-1}\ \mathbf{f_{0:t}}
</math>
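The scaling can be folded into the forward loop; a sketch under the same assumed example model (transition matrix, initial vector, and observation sequence are illustrative). A useful property of this scheme is that the product of the scaling factors <math>c_t</math> recovers the total probability of the observation sequence.

```python
import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])   # assumed transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities from the example

def forward_normalized(pi, observations):
    """Return the normalized forward vector and the scaling factors c_t."""
    f = pi.copy()
    scales = []
    for o in observations:
        f = f @ T @ np.diag(B[:, o])
        c = f.sum()        # c_t: sum of the unnormalized entries
        scales.append(c)
        f = f / c          # \hat{f}_{0:t} = c_t^{-1} f_{0:t}
    return f, scales

f_hat, scales = forward_normalized(np.array([0.5, 0.5]), [0, 0, 1])
```

Here `f_hat` sums to one at every step, which avoids numerical underflow on long sequences, and `np.prod(scales)` equals the unnormalized total probability of the observations.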