WikiProject Robotics (Start-class, Mid-importance)
WikiProject Computing (Start-class)
--- The description is very hard to follow. It is incomplete, being based upon and referring to a book that is not available online. The one-dimensional vectors are written wrongly in the matrix formulas (horizontal instead of vertical). IMO, this article needs a fundamental rewrite; it is extremely frustrating to read as it stands.
--- I am not comfortable making the edits, but I believe there are major errors here.
With respect to the backward messages b. It is stated:
Finally note that the description in Russell & Norvig 2003, pp. 550, excludes the point product, though the procedure is required earlier.
I don't believe this is correct. It should be as in the book:
b_{k+1:t} = T O_{k+1} b_{k+2:t}
with no point product with f and no normalization (this is a probability, NOT a distribution; it should certainly not sum to 1). This is only the backward message; it implements the update shown on p. 545, eq. (15.7).
You multiply the b with the corresponding f in each time slice to obtain the smoothed vectors sv. So as written, there are two multiplications by f taking place! --- —Preceding unsigned comment added by Mihai preda (talk • contribs) 16:22, 15 July 2008 (UTC)

--- The word 'umbrella' appears suddenly in the middle of the example. What is this t=1 'umbrella'? —Preceding unsigned comment added by 134.174.140.104 (talk) 14:58, 3 July 2008 (UTC)

--- I am afraid this isn't complete? The forward-backward approach is meant to avoid the time complexity of the brute-force one, but the method really isn't elaborated here. --huggie
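For what it's worth, the backward recursion and the separate smoothing step being discussed can be sketched in a few lines of Python. The transition and sensor numbers below are the umbrella-world values from Russell & Norvig (P(rain_t | rain_{t-1}) = 0.7, P(umbrella | rain) = 0.9, P(umbrella | no rain) = 0.2), and the forward message f is the textbook's f_{1:1} = <0.818, 0.182>; the helper function names are my own:

```python
# Backward update b_{k+1:t} = T O_{k+1} b_{k+2:t}, then smoothing sv = alpha f x b.
# State order is (rain, no rain).
T = [[0.7, 0.3],
     [0.3, 0.7]]
# Diagonal observation matrix for "umbrella seen".
O_umbrella = [[0.9, 0.0],
              [0.0, 0.2]]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def backward_step(b_next, O):
    # One application of b_{k+1:t} = T O_{k+1} b_{k+2:t}.
    # Note: no multiplication by f and no normalization here --
    # b is a likelihood vector, not a distribution.
    return matvec(T, matvec(O, b_next))

b = [1.0, 1.0]                     # base case b_{t+1:t}
b = backward_step(b, O_umbrella)   # approx [0.69, 0.41]

# Smoothing is the only place f and b are multiplied together.
f = [0.818, 0.182]                 # forward message f_{1:1} from the textbook
sv = [fi * bi for fi, bi in zip(f, b)]
s = sum(sv)
sv = [x / s for x in sv]           # normalize the smoothed distribution
print(b, sv)
```

Run as written, sv comes out near <0.883, 0.117>, which matches the book's smoothed estimate, so the point product with f really does belong only in the smoothing step, not inside the backward recursion.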
Also, the time complexity formula should omit any constant factors (i.e., the 2). The forward-backward algorithm itself just exploits the Markov property to reduce the time complexity to something like O(S^2 T), where S is the number of symbols in the alphabet and T is the length of the sequence. Some rough pseudo-code is below:
ForwardBackward(guessState, sequenceIndex):
    if sequenceIndex is past the end of the sequence, return 1
    if (guessState, sequenceIndex) has been seen before, return saved result
    result = 0
    for each neighboring state n:
        result = result + (transition probability from guessState to n
                           given observation element at sequenceIndex)
                        * ForwardBackward(n, sequenceIndex + 1)
    save result for (guessState, sequenceIndex)
    return result
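The pseudo-code above translates almost line for line into Python. A minimal sketch follows; the two-state weather HMM and all of its probabilities are made-up values chosen only so the example runs, a "Start" row in the transition table plays the dummy start state, and functools.lru_cache does the "save result / reuse saved result" bookkeeping:

```python
from functools import lru_cache

states = ("Rainy", "Sunny")
observations = ("walk", "shop", "walk")

# transition[i][j] = P(next state = j | current state = i).
# The "Start" row holds the initial distribution (the dummy start state).
transition = {
    "Start": {"Rainy": 0.5, "Sunny": 0.5},
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
# emission[i][o] = P(observation = o | state = i).
emission = {
    "Rainy": {"walk": 0.1, "shop": 0.9},
    "Sunny": {"walk": 0.6, "shop": 0.4},
}

@lru_cache(maxsize=None)
def forward_backward(guess_state, seq_index):
    # Past the end of the sequence: the empty suffix has probability 1.
    if seq_index == len(observations):
        return 1.0
    # Sum over neighboring states n, weighting each branch by the
    # transition probability and the emission probability of the
    # observation at seq_index; memoization makes this O(S^2 T) overall.
    return sum(
        transition[guess_state][n]
        * emission[n][observations[seq_index]]
        * forward_backward(n, seq_index + 1)
        for n in states
    )

# The initial call, via the dummy start state.
likelihood = forward_backward("Start", 0)
print(likelihood)
```

The result is the total likelihood of the observation sequence, i.e. the same number a brute-force sum over all S^T state sequences would give, just computed in polynomial time.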
The initial call can be done either by looping over all start states, calling with each and weighting by its initial probability, or by creating a dummy start state with transition probabilities equal to the initial probabilities and calling with that. Mskinn99 (talk) 21:04, 13 January 2008 (UTC)
Viterbi algorithm has better pseudo-code, and a better description. The two algorithms are so similar (they can both be implemented in a single function) that it might be worth merging the articles. Mskinn99 (talk) 23:24, 13 January 2008 (UTC)
I extended the page with a brief overview, a semi-formal description of the algorithm, references to some performance improvements, and an example. I personally feel that the matrix-based description is easiest to follow. Additionally, I felt that it is most helpful to understand the workings of this algorithm through a numerical example. Therefore I included a (rather extensive) example based on one presented in the widely used AI textbook of Russell and Norvig. I also referenced a repository of source code where Java code implementing the algorithm may be found. I am generally new to editing at Wikipedia but tried to follow the guidelines I found to be relevant. If the content is found unsuitable, contains errors or is otherwise problematic, I certainly welcome feedback, critique and any edits, including deletions, deemed necessary :) BJJV (talk) 11:08, 26 May 2008 (UTC)