{{cleanup|date=December 2009}}
{{Expert-subject|Mathematics|date=December 2009}}
'''Sequential decoding''' is a limited-memory technique for decoding [[tree codes]]. Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length [[convolutional code]]s.
Sequential decoding explores the tree code in a way that tries to minimise the computational cost and the memory required to store the tree.
==Fano metric==
Given a partially explored tree (represented by the set of nodes at the limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after [[Robert Fano]]) allows one to calculate which node is the best to explore further. This metric is optimal given no other constraints on the decoder.
For a [[binary symmetric channel]] (with error probability <math>p</math>) the Fano metric can be derived via [[Bayes theorem]]. We are interested in following the most likely path <math>P_i</math> given an explored state of the tree <math>X</math> and a received sequence <math>{\mathbf r}</math>. Using the language of [[probability]] and [[Bayes theorem]] we want to choose the maximum over <math>i</math> of:
We now introduce the following notation:
*<math>N</math> to represent the maximum length of transmission in branches
*<math>b</math> to represent the number of bits on a branch of the code (the denominator of the [[code rate]], <math>R</math>).
*<math>d_i</math> to represent the number of bit errors on path <math>P_i</math> (the [[Hamming distance]] between the branch labels and the received sequence)
*<math>n_i</math> to be the length of <math>P_i</math> in branches.
We express the [[likelihood]] <math>\Pr({\mathbf r}|P_i,X)</math> as <math>p^{d_i} (1-p)^{n_ib-d_i} 2^{-(N-n_i)b}</math> (by using the binary symmetric channel likelihood for the first <math>n_ib</math> bits followed by a uniform prior over the remaining bits).
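As an illustration (not part of the article; the function name is ours), the likelihood above can be evaluated numerically in log form. The example shows why a longer path with one error can still outscore a shorter error-free path:

```python
import math

def path_log_likelihood(d, n, b, N, p):
    """log2 Pr(r | P_i, X) for a binary symmetric channel with error
    probability p: d bit errors over n branches of b bits each,
    with a maximum transmission length of N branches."""
    return (d * math.log2(p)
            + (n * b - d) * math.log2(1 - p)
            - (N - n) * b)  # uniform 2^{-(N-n)b} factor over unexplored bits

# Illustrative numbers: b = 2 bits per branch, N = 10 branches, p = 0.1.
short_path = path_log_likelihood(d=0, n=2, b=2, N=10, p=0.1)
long_path = path_log_likelihood(d=1, n=6, b=2, N=10, p=0.1)
```

Here `long_path` exceeds `short_path`: the uniform factor over the unexplored bits penalises short paths, which is what lets the metric compare paths of different lengths.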
We can equivalently maximise the log of this probability (taking the prior <math>\Pr(P_i|X)=2^{-n_ibR}</math>), i.e.
:<math>
\begin{align}
& \log_2\left(p^{d_i} (1-p)^{n_ib-d_i}\, 2^{-(N-n_i)b}\, 2^{-n_ibR}\right) \\
={} & d_i\log_2 p + (n_ib-d_i)\log_2(1-p) - (N-n_i)b - n_ibR.
\end{align}
</math>
==Computational cutoff rate==
For sequential decoding to be a good choice of decoding algorithm, the number of explored states needs to remain small (otherwise an algorithm which deliberately explores all states, e.g. the [[Viterbi algorithm]], may be more suitable). For a given noise level there is a maximum [[code rate]] <math>R_0</math>, called the computational cutoff rate, below which the expected computation of a sequential decoder remains bounded. For a [[binary symmetric channel]]:
:<math>R_0 = 1-\log_2(1+2\sqrt{p(1-p)})</math>
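The cutoff-rate formula above is easy to evaluate (illustrative code, not part of the article; the function name is ours):

```python
import math

def cutoff_rate(p):
    """Computational cutoff rate R0 of a binary symmetric channel
    with crossover probability p: R0 = 1 - log2(1 + 2*sqrt(p(1-p)))."""
    return 1 - math.log2(1 + 2 * math.sqrt(p * (1 - p)))

# R0 shrinks as the channel gets noisier:
for p in (0.01, 0.05, 0.1):
    print(f"p = {p}: R0 = {cutoff_rate(p):.3f}")
```

Note that <math>R_0</math> falls to zero as <math>p</math> approaches <math>1/2</math> (a useless channel), so for noisy channels only low-rate codes keep the expected computation bounded.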
==Algorithms==
===Stack algorithm===
The simplest algorithm to describe is the "stack algorithm", in which the best <math>N</math> paths found so far are stored. Sequential decoding may introduce an additional error above Viterbi decoding when the correct path has <math>N</math> or more higher-scoring partial paths above it, since the correct path is then dropped from the stack before it can be extended.
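A minimal sketch of the stack algorithm (illustrative, not from the article: the toy rate-1/2 code and all names are ours, and per-bit Fano metrics for a binary symmetric channel are used):

```python
import heapq
import math

# Toy rate R = 1/2 tree code (an illustrative choice): input bit u
# emits the two output bits (u, u XOR previous input bit).
def branch_bits(prev_bit, u):
    return (u, u ^ prev_bit)

def encode(bits):
    out, prev = [], 0
    for u in bits:
        out.extend(branch_bits(prev, u))
        prev = u
    return out

def stack_decode(received, p, stack_size):
    """Stack-algorithm sequential decoder: repeatedly extend the partial
    path whose Fano metric is best, keeping at most stack_size paths."""
    R = 0.5
    good = math.log2(2 * (1 - p)) - R  # per-bit Fano metric, bit agrees
    bad = math.log2(2 * p) - R         # per-bit Fano metric, bit disagrees
    L = len(received) // 2             # number of branches
    stack = [(0.0, ())]                # min-heap of (-metric, path)
    while True:
        neg_metric, path = heapq.heappop(stack)
        if len(path) == L:
            return path                # best path has reached full length
        prev = path[-1] if path else 0
        rx = received[2 * len(path): 2 * len(path) + 2]
        for u in (0, 1):
            out = branch_bits(prev, u)
            m = -neg_metric + sum(good if o == r else bad
                                  for o, r in zip(out, rx))
            heapq.heappush(stack, (-m, path + (u,)))
        if len(stack) > stack_size:
            # keep only the best stack_size paths; a sorted list is a valid heap
            stack = heapq.nsmallest(stack_size, stack)

msg = (1, 0, 1, 1, 0)
rx = encode(msg)
rx[3] ^= 1                             # one channel error
decoded = stack_decode(rx, p=0.1, stack_size=8)
```

In this small example `decoded` recovers `msg` despite the flipped bit; shrinking `stack_size` far enough would eventually push the correct partial path off the stack, illustrating the extra error mechanism described above.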
===Fano algorithm===
The Fano algorithm explores a single path at a time, moving forward and backward through the tree according to a running threshold on the Fano metric; it needs very little memory but may revisit nodes many times.
==External links==
*"[http://demonstrations.wolfram.com/CorrectionTrees/ Correction trees]" - simulator of correction process using priority queue to choose maximum metric node (called weight)
{{DEFAULTSORT:Sequential Decoding}}
[[Category:Error detection and correction]]