Recognised by [[John Wozencraft]], '''sequential decoding''' is a limited-memory technique for decoding [[tree codes]]. Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length [[convolutional code]]s. This approach may not be as accurate as the [[Viterbi algorithm]] but can save a substantial amount of computer memory.
Sequential decoding explores the tree code in such a way as to try to minimise the computational cost and the memory required to store the tree.
There is a range of sequential decoding approaches based on the choice of metric and algorithm. Metrics include:
*Fano metric
*Zigangirov metric
Given a partially explored tree (represented by a set of nodes which are the limits of exploration), we would like to know the best node from which to explore further. The Fano metric (named after [[Robert Fano]]) allows one to calculate which is the best node to explore further. This metric is optimal given no other constraints (e.g. memory).
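One way to act on such a metric is a best-first search that keeps the frontier of explored nodes ordered by metric and always extends the best one (essentially the stack algorithm). A minimal sketch in Python; the callbacks <code>expand</code>, <code>metric</code> and <code>is_terminal</code> are illustrative names, not a fixed API:

```python
import heapq

def best_first_decode(root, expand, metric, is_terminal):
    """Best-first exploration of a code tree: repeatedly extend the
    frontier node with the largest metric.  The expand/metric/is_terminal
    callbacks are assumptions made for this sketch."""
    counter = 0  # tie-breaker so heapq never has to compare nodes
    frontier = [(-metric(root), counter, root)]  # max-heap via negated metric
    while frontier:
        _, _, node = heapq.heappop(frontier)     # best node explored so far
        if is_terminal(node):
            return node                          # reached a terminal node
        for child in expand(node):
            counter += 1
            heapq.heappush(frontier, (-metric(child), counter, child))
    return None
```

With a toy metric (number of received bits a partial path agrees with), the search homes in on the closest terminal path without visiting the whole tree.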
For a [[binary symmetric channel]] (with error probability <math>p</math>) the Fano metric can be derived via [[Bayes' theorem]]. We are interested in following the most likely path <math>P_i</math> given an explored state of the tree <math>X</math> and a received sequence <math>{\mathbf r}</math>. Using the language of [[probability]] and [[Bayes' theorem]] we want to choose the maximum over <math>i</math> of:
:<math>\Pr(P_i|X,{\mathbf r}) \propto \Pr({\mathbf r}|P_i,X)\Pr(P_i|X)</math>
*<math>n_i</math> to be the length of <math>P_i</math> in branches.
We express the [[likelihood]] <math>\Pr({\mathbf r}|P_i,X)</math> as <math>p^{d_i} (1-p)^{n_ib-d_i} 2^{-(N-n_i)b}</math> (by using the binary symmetric channel likelihood for the first <math>n_ib</math> bits followed by a uniform prior over the remaining bits).
We express the [[prior probability|prior]] <math>\Pr(P_i|X)</math> in terms of the number of branch choices one has made, <math>n_i</math>, and the number of branches from each node, <math>2^{Rb}</math>.
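The likelihood and the prior above can be combined numerically in log form. A sketch, assuming base-2 logarithms and the symbols defined above (the function name is mine):

```python
from math import log2

def log2_posterior(d_i, n_i, b, N, R, p):
    """Unnormalised log2 of Pr(P_i | X, r) for a binary symmetric channel.

    d_i : Hamming distance between the path's code bits and r
    n_i : length of the path P_i in branches
    b   : code bits per branch;  N : total branches;  R : code rate
    p   : channel crossover probability
    """
    # Likelihood: p^d_i (1-p)^(n_i*b - d_i) * 2^(-(N - n_i)*b)
    log_lik = d_i * log2(p) + (n_i * b - d_i) * log2(1 - p) - (N - n_i) * b
    # Prior: n_i branch choices, each uniform over the 2^(R*b) branches
    log_prior = -n_i * R * b
    return log_lik + log_prior
```

As expected, for paths of equal length the value decreases as the Hamming distance <math>d_i</math> grows (for <math>p < 1/2</math>).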
Therefore:
==Computational cutoff rate==
For sequential decoding to be a good choice of decoding algorithm, the number of states explored must remain small; the expected computation per decoded bit stays bounded only for code rates below the computational cutoff rate, which for the binary symmetric channel is
:<math>R_0 = 1-\log_2(1+2\sqrt{p(1-p)})</math>
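The cutoff-rate formula is straightforward to evaluate numerically. A small sketch (the function name is illustrative):

```python
from math import log2, sqrt

def cutoff_rate(p):
    """Computational cutoff rate R_0 of a binary symmetric channel
    with crossover probability p, in bits per channel use."""
    return 1 - log2(1 + 2 * sqrt(p * (1 - p)))
```

At <math>p = 0</math> this gives <math>R_0 = 1</math> (a noiseless channel), falling to <math>R_0 = 0</math> at <math>p = 1/2</math>.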
# At each decoding stage, the Fano algorithm retains the information regarding three paths: the current path, its immediate predecessor path, and one of its successor paths.
# Based on this information, the Fano algorithm can move from the current path to either its immediate predecessor path or the selected successor path; hence, no stack is required for queuing all examined paths.
# The movement of the Fano algorithm is guided by a dynamic threshold {{var|T}} that is an integer multiple of a fixed step size Δ.
# Only a path whose path metric is no less than {{var|T}} can be visited next. The codeword search keeps moving forward along a code path as long as the Fano metric along that path remains non-decreasing.
# Once all the successor path metrics are smaller than {{var|T}}, the algorithm moves backward to the predecessor path if the predecessor path metric is at least {{var|T}}; thereafter, the threshold examination is performed on another successor path of this revisited predecessor.
# If the predecessor path metric is also less than {{var|T}}, the threshold {{var|T}} is lowered by one step so that the algorithm is not trapped on the current path.
# For the Fano algorithm, if a path is revisited, the presently examined dynamic threshold is always lower than the momentary dynamic threshold at the previous visit, guaranteeing that looping in the algorithm does not occur, and that the algorithm can ultimately reach a terminal node of the code tree, and stop.
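The moves described above can be sketched as follows. This is an illustrative simplification, not a production decoder: the <code>branch_metric</code> callback and the first-visit test used for threshold tightening are assumptions made for the example.

```python
from math import inf

DELTA = 1  # fixed step size for the dynamic threshold T

def fano_decode(depth, branch_metric, arity=2):
    """Sketch of the Fano algorithm on a code tree of the given depth.

    branch_metric(path, symbol) returns the Fano-metric increment for
    extending `path` with `symbol` (an assumed interface).  The search
    moves forward while a successor metric is >= T, backs up to the
    predecessor when it can, and lowers T by DELTA when it cannot, so
    no stack of examined paths is needed."""
    path = []    # current path (list of branch symbols)
    M = [0]      # M[k] = metric of the length-k prefix of path
    rank = [0]   # rank[k] = index of the successor currently tried at depth k
    T = 0        # dynamic threshold
    while len(path) < depth:
        # successors of the current node, best branch metric first
        succ = sorted(range(arity), key=lambda s: -branch_metric(path, s))
        r = rank[len(path)]
        MF = M[-1] + branch_metric(path, succ[r]) if r < arity else -inf
        if MF >= T:
            # forward move to the chosen successor
            first_visit = M[-1] < T + DELTA
            path.append(succ[r])
            M.append(MF)
            rank.append(0)
            if first_visit:               # tighten T on a first visit
                while T + DELTA <= MF:
                    T += DELTA
        else:
            MB = M[-2] if len(M) > 1 else -inf
            if MB >= T:
                # backward move; next time, try the next-best successor here
                path.pop(); M.pop(); rank.pop()
                rank[-1] += 1
            else:
                T -= DELTA                # lower T so the search is not stuck
                rank[len(path)] = 0       # re-examine successors under new T
    return path
```

With a toy per-branch metric (reward for matching a received bit, penalty otherwise), the decoder follows the good path directly, and backtracks then lowers {{var|T}} when the initially best-looking branch leads to poor successors.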