:Strongly agree. I work with matrix maths a lot, but it's obvious to me that the explanation given on this page is quite hard to understand. It seems to be oriented towards computational efficiency rather than clear exposition. The bit with the emission probabilities placed into a diagonal matrix, in particular, really rams the point home that the layout is not there to help us understand it! --[[User:Mcld|mcld]] ([[User talk:Mcld|talk]]) 11:05, 2 May 2012 (UTC)
I agree. There's an additional problem: the sequence of evidence states, 00100, is a [[palindrome]], which is extremely confusing for an algorithm that is supposed to have separate forward and backward steps. Furthermore, the transition matrix is symmetric, which also makes the backward step unclear. Should you transpose it? [[User:Chrism2671|Chrism2671]] ([[User talk:Chrism2671|talk]]) 12:45, 6 April 2022 (UTC)
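:To make the matrix layout discussed above concrete, here is a minimal Python sketch of the forward and backward passes in the diagonal-emission form (the names <code>forward_backward</code>, <code>T</code>, <code>B</code> are my own, not the article's). Because the backward message is kept as a column vector, the recursion uses the transition matrix itself rather than its transpose; with a symmetric transition matrix the two coincide anyway, which may be exactly why the example obscures the distinction.
<syntaxhighlight lang="python">
import numpy as np

def forward_backward(T, B, obs, prior):
    """Forward-backward with emissions placed on a diagonal (sketch).

    T     : N x N transition matrix, T[i, j] = P(next state j | state i)
    B     : N x K emission matrix,   B[i, o] = P(observation o | state i)
    obs   : observation indices, e.g. [0, 0, 1, 0, 0] for the 00100 sequence
    prior : initial state distribution (treated as a row vector)
    """
    N = T.shape[0]
    fwd = [np.asarray(prior, dtype=float)]
    for o in obs:
        f = fwd[-1] @ T @ np.diag(B[:, o])   # row vector @ T, scaled by emissions
        fwd.append(f / f.sum())              # normalise to avoid underflow
    bwd = [np.ones(N)]                       # beta_T = 1
    for o in reversed(obs):
        b = T @ np.diag(B[:, o]) @ bwd[0]    # column vector: T, not T transposed
        bwd.insert(0, b / b.sum())
    # smoothed posterior at each time: elementwise product, renormalised
    return [f * b / (f * b).sum() for f, b in zip(fwd, bwd)]
</syntaxhighlight>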
== Merge with BCJR algorithm ==
== Consistent notation across HMM articles ==
The notation used on this page is inconsistent with the notation used on the [[Hidden_Markov_model|HMM]], [[Forward_algorithm|forward algorithm]], and [[Viterbi_algorithm|Viterbi algorithm]] pages. Namely, on those pages <math>x</math> represents the hidden states and <math>y</math> the observations, whereas here <math>o</math> represents the hidden states and <math>x</math> the observations. I know there is generally no standard notational convention across the many HMM papers, but for this set of articles it would be much less confusing to stick to a consistent notation. Thoughts? <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:OhGodItsSoAmazing|OhGodItsSoAmazing]] ([[User talk:OhGodItsSoAmazing|talk]] • [[Special:Contributions/OhGodItsSoAmazing|contribs]]) 19:32, 7 December 2013 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
== Request for efficient higher order extension ==
I've been looking for a reference that describes the efficient extension to orders greater than one. The basic trick is to expand the state space, <math>(Q \times Q) \to Q'</math>, and then run the algorithm over pairs, mapping from [<math>q_k</math> at <math>t-1</math>, <math>q_l</math> at <math>t</math>] to [<math>q_j</math> at <math>t</math>, <math>q_i</math> at <math>t+1</math>]. Clearly <math>q_l = q_j</math>. This puts many zeros in the transition probability matrix.
Without modifying the algorithm, time is wasted on these zeros. A second-order HMM implemented naively has a run time of O(N^4 T), but squeezing the zeros out reduces this to O(N^3 T). A third-order HMM implemented naively runs in O(N^6 T), but squeezing the zeros out so the algorithm never touches them reduces this to O(N^4 T). I've done some work on this myself, but without references it would count as original research.
I'll provide one possible extension of the forward pass below (using the non-matrix form of the forward algorithm listed here: http://www.cs.sjsu.edu/~stamp/RUA/HMM.pdf):
:<math>f_{0:1}(i) = \pi(i)\, b_i(O_1)</math>
:<math>f_{0:2}(j,i) = \pi(j,i)\, b_{j,i}(O_2)</math>, using <math>\pi(j,i)</math> as the joint prior probability of state <math>q_j</math> followed by state <math>q_i</math>. Alternatively,
:<math>f_{0:2}(j,i) = f_{0:1}(j)\, T_{j,i}\, b_{j,i}(O_2)</math>, where <math>T_{j,i}</math> is an element of the first-order transition matrix.
:<math>f_{0:t}(j,i) = \sum_{k,l=1}^{N} f_{0:t-1}(k,l)\, T_{(k,l),(j,i)}\, b_{j,i}(O_t)</math>, but <math>q_l = q_j</math>. Squeezing the zeros out gives:
:<math>f_{0:t}(j,i) = \sum_{k=1}^{N} f_{0:t-1}(k,j)\, T_{(k,j),(j,i)}\, b_{j,i}(O_t)</math>, for <math>i, j = 1, 2, \ldots, N</math>.
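For what it's worth, here is a minimal Python sketch of the squeezed recursion above, which makes the O(N^3 T) cost visible. The array names <code>pi2</code>, <code>T2</code>, <code>B2</code> and their index conventions are my own assumptions, not taken from the reference.
<syntaxhighlight lang="python">
import numpy as np

def forward_second_order(pi2, T2, B2, obs):
    """Second-order forward pass with structural zeros squeezed out: O(N^3 T).

    pi2[j, i]   : joint prior P(q_1 = q_j, q_2 = q_i)          (assumed layout)
    T2[k, j, i] : P(q_{t+1} = q_i | q_{t-1} = q_k, q_t = q_j)
    B2[j, i, o] : P(O_t = o | q_{t-1} = q_j, q_t = q_i)
    obs         : observation indices O_1 .. O_T
    """
    N, T = pi2.shape[0], len(obs)
    f = np.zeros((T, N, N))
    f[1] = pi2 * B2[:, :, obs[1]]            # f_{0:2}(j,i) = pi(j,i) b_{j,i}(O_2)
    for t in range(2, T):
        for j in range(N):
            for i in range(N):
                # only the oldest state k is summed over; the shared state
                # q_{t-1} = q_j is pinned, so each entry costs O(N), not O(N^2)
                f[t, j, i] = (f[t-1, :, j] @ T2[:, j, i]) * B2[j, i, obs[t]]
    return f
</syntaxhighlight>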
Mouse7mouse9 20:20, 28 January 2014 (UTC) <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Mouse7mouse9|Mouse7mouse9]] ([[User talk:Mouse7mouse9|talk]] • [[Special:Contributions/Mouse7mouse9|contribs]]) </span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
== Forward Probabilities ==
There is already a page on the [[Forward algorithm]]; it would be better to reference that instead of repeating it here. The section on backward probabilities starts:
A similar procedure can be constructed to find backward probabilities.
referring to the section on forward probabilities, which forces a reader who has already read the [[Forward algorithm]] article to go back and review the section above to make sure they understand the reference ...