Generalized minimum-distance decoding

 
==Introduction==
In [[coding theory]], Generalized Minimum Distance (GMD) decoding provides an efficient [[algorithm]] for decoding [[concatenated code]]s, which is based on using an errors-and-[[erasure code|erasures]] decoder for the [[concatenated code|outer code]].
A [[Decoding methods|naive decoding algorithm]] for concatenated codes cannot be an optimal way of decoding because it does not take into account the information that [[maximum likelihood decoding]] (MLD) gives. In other words, in the naive algorithm, inner received [[codeword]]s are treated the same regardless of the difference between their [[Hamming distance]]s. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. [[David Forney]] in 1966 devised a better algorithm, called Generalized Minimum Distance (GMD) decoding, which makes better use of this information. The method works by measuring the confidence of each received symbol and erasing symbols whose confidence falls below a desired threshold. The GMD decoding algorithm was one of the first examples of [[Soft-decision decoder|soft-decision decoders]]. We will present three versions of the GMD decoding algorithm: the first two are [[randomized algorithm]]s, while the last one is a [[deterministic algorithm]].
 
==Setup==
# Hamming distance : Given two [[vector]]s <math>u, v \in \Sigma^n</math>, the Hamming distance between <math>u</math> and <math>v</math>, denoted by <math>\Delta(u, v)</math>, is defined to be the number of positions in which <math>u</math> and <math>v</math> differ.
# Minimum distance : Let <math>C \subseteq \Sigma^n</math> be a code. The minimum distance of the code <math>C</math> is defined to be <math>d = \min_{c_1 \ne c_2 \in C} \Delta(c_1, c_2)</math>.
# Code concatenation : Given <math>m = (m_1, \ldots, m_K) \in [Q]^K</math>, consider two codes, which we call the outer code and the inner code, <math>C_{out} : [Q]^K \rightarrow [Q]^N</math> and <math>C_{in} : [q]^k \rightarrow [q]^n</math>, with distances <math>D</math> and <math>d</math> respectively. A concatenated code can be obtained as <math>C_{out} \circ C_{in}(m) = (C_{in}(C_{out}(m)_1), \ldots, C_{in}(C_{out}(m)_N))</math> where <math>C_{out}(m) = (C_{out}(m)_1, \ldots, C_{out}(m)_N)</math>. Finally we will take <math>C_{out}</math> to be a [[Reed–Solomon error correction|Reed–Solomon (RS) code]], which has an errors-and-erasures decoder, and <math>k = O(\log N)</math>, which in turn implies that MLD on the inner code runs in poly(<math>N</math>) time.
# Maximum likelihood decoding (MLD) : MLD is a decoding method for error-correcting codes that outputs the codeword closest to the received word in Hamming distance. The MLD function, denoted by <math>D_{MLD} : \Sigma^n \rightarrow C</math>, is defined as follows: for every <math>y \in \Sigma^n</math>, <math>D_{MLD}(y) = \arg \min_{c \in C} \Delta(c, y)</math>.
# [[Probability density function]] : A [[probability distribution]] <math>\Pr[\bullet]</math> on a sample space <math>S</math> is a mapping from events of <math>S</math> to [[real number]]s such that <math>\Pr[A] \ge 0</math> for any event <math>A</math>, <math>\Pr[S] = 1</math>, and <math>\Pr[A \cup B] = \Pr[A] + \Pr[B]</math> for any two mutually exclusive events <math>A</math> and <math>B</math>.
# [[Expected value]] : The expected value of a [[discrete random variable]] <math>X</math> is <math>\mathbb{E}[X] = \sum_x x\Pr[X = x]</math>. A small code sketch of these definitions is given after this list.
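
The following Python snippet is only an illustrative sketch of the definitions above: it assumes codes are represented as explicit collections of codewords, which is feasible for brute-force MLD precisely because the inner dimension is <math>O(\log N)</math>; all function names are illustrative, not part of any standard library.

<syntaxhighlight lang="python">
def hamming_distance(u, v):
    """Number of positions in which u and v differ (definition 1)."""
    return sum(1 for a, b in zip(u, v) if a != b)


def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords (definition 2)."""
    return min(hamming_distance(c1, c2)
               for i, c1 in enumerate(code) for c2 in code[i + 1:])


def mld(code, y):
    """Maximum likelihood decoding: the codeword closest to y in Hamming distance (definition 4)."""
    return min(code, key=lambda c: hamming_distance(c, y))


def concatenated_encode(outer_encode, inner_encode, m):
    """Code concatenation (definition 3): encode m with the outer code,
    then encode each resulting outer symbol with the inner code."""
    return [inner_encode(symbol) for symbol in outer_encode(m)]
</syntaxhighlight>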
 
==Randomized algorithm==
Consider the received word <math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>, which was corrupted by a [[noisy channel]]. The following is the algorithm description for the general case; a short illustrative code sketch is given after the steps. In this algorithm, we can decode <math>\mathbf{y}</math> by declaring an erasure at every bad position and running the errors-and-erasures decoding algorithm for <math>C_{out}</math> on the resulting vector.
 
'''Randomized_Decoder'''
Given : <math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>.
# For every <math>1 \le i \le N</math>, compute <math>y_i^\prime = MLD_{C_{in}}(y_i)</math>.
# For every <math>1 \le i \le N</math>, set <math>\omega_i = \min\left(\Delta(C_{in}(y_i^\prime), y_i), {d \over 2}\right)</math>.
# For every <math>1 \le i \le N</math>, with probability <math>{2\omega_i \over d}</math> set <math>y_i^{\prime\prime} \leftarrow ?</math> (an erasure), and otherwise set <math>y_i^{\prime\prime} = y_i^\prime</math>, using fresh randomness for each <math>i</math>.
# Run errors and erasure algorithm for <math>C_{out}</math> on <math>\mathbf{y}^{\prime\prime} = (y_1^{\prime\prime}, \ldots, y_N^{\prime\prime})</math>.
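
The following is only a rough sketch of the decoder above, assuming the inner code is given as a mapping from outer-alphabet symbols to inner codewords and that an errors-and-erasures decoder <code>outer_decode</code> for <math>C_{out}</math> is available; the symbol "?" marks an erasure. These representations and names are illustrative, not part of the original description.

<syntaxhighlight lang="python">
import random

ERASURE = "?"


def hamming_distance(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)


def randomized_decoder(y, inner_code, outer_decode, d, rng=random):
    """y is the list of N received inner blocks; inner_code maps outer symbols to inner codewords."""
    y_pp = []
    for y_i in y:
        # Step 1: MLD for the inner code on the i-th block.
        sym, c_i = min(inner_code.items(),
                       key=lambda kv: hamming_distance(kv[1], y_i))
        # Step 2: confidence measure omega_i, capped at d/2.
        w_i = min(hamming_distance(c_i, y_i), d / 2)
        # Step 3: declare an erasure with probability 2*omega_i/d (fresh randomness for each i).
        y_pp.append(ERASURE if rng.random() < 2 * w_i / d else sym)
    # Step 4: errors-and-erasures decoding of the outer code on y''.
    return outer_decode(y_pp)
</syntaxhighlight>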
 
'''Theorem 1.''' ''Let y be a received word such that there exists a [[codeword]]'' <math>\mathbf{c} = (c_1, \ldots, c_N) \in C_{out}\circ{C_{in}} \subseteq [q^n]^N</math> ''such that'' <math>\Delta(\mathbf{c}, \mathbf{y}) < {Dd \over 2}</math>. ''Then the deterministic GMD algorithm outputs'' <math>\mathbf{c}</math>.
 
Note that a [[concatenated code|naive decoding algorithm for concatenated codes]] can correct up to <math>{Dd \over 4}</math> errors.
 
'''Lemma 1.''' ''Let the assumption in Theorem 1 hold. And if'' <math>\mathbf{y}^{\prime\prime}</math> ''has'' <math>e'</math> ''errors and'' <math>s'</math> ''erasures (when compared with'' <math>\mathbf{c}</math>'') after'' '''Step 1''', ''then'' <math>\mathbb{E}[2e' + s'] < D</math>.
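
The quantity <math>2e' + s'</math> is the natural one to bound because of a standard fact about errors-and-erasures decoding: a code of minimum distance <math>D</math> can be uniquely decoded from <math>e'</math> errors and <math>s'</math> erasures whenever <math>2e' + s' < D</math>. Thus, whenever the random choices make <math>2e' + s'</math> fall below <math>D</math>, the errors-and-erasures decoder for <math>C_{out}</math> in the last step recovers <math>\mathbf{c}</math>.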
 
==Modified randomized algorithm==
Note that, in the previous version of the GMD algorithm in step "3", we do not really need to use "fresh" [[randomness]] for each <math>i</math>. Now we come up with another randomized version of the GMD algorithm that uses the ''same'' randomness for every <math>i</math>. This idea is used in the algorithm below.
 
'''Modified_Randomized_Decoder'''
Given : <math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>.
# For every <math>1 \le i \le N</math>, compute <math>y_i^\prime = MLD_{C_{in}}(y_i)</math>.
# For every <math>1 \le i \le N</math>, set <math>\omega_i = \min\left(\Delta(C_{in}(y_i^\prime), y_i), {d \over 2}\right)</math>.
# Pick <math>\theta \in [0, 1]</math> uniformly at random. For every <math>1 \le i \le N</math>, set <math>y_i^{\prime\prime} = ?</math> if <math>\theta < {2\omega_i \over d}</math>, and set <math>y_i^{\prime\prime} = y_i^\prime</math> otherwise.
# Run errors and erasure algorithm for <math>C_{out}</math> on <math>\mathbf{y}^{\prime\prime} = (y_1^{\prime\prime}, \ldots, y_N^{\prime\prime})</math>.
 
For the proof of '''Lemma 1''', we only use the randomness to show that
 
<math>\Pr[y_i^{\prime\prime} = ?] = {2\omega_i \over d}</math>.
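
The following sketch (with illustrative names, as before) shows why a single threshold suffices: if <math>\theta</math> is chosen uniformly from <math>[0, 1]</math>, then for each <math>i</math> the event <math>\theta < {2\omega_i \over d}</math> has probability exactly <math>{2\omega_i \over d}</math> (a valid probability since <math>\omega_i \le {d \over 2}</math>), even though the same <math>\theta</math> is reused for every position.

<syntaxhighlight lang="python">
import random

ERASURE = "?"


def modified_step3(decoded_symbols, omegas, d, rng=random):
    """decoded_symbols[i] = y_i' and omegas[i] = omega_i from steps 1-2."""
    theta = rng.random()  # one theta in [0, 1), shared by every position i
    return [ERASURE if theta < 2 * w / d else sym
            for sym, w in zip(decoded_symbols, omegas)]
</syntaxhighlight>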

==Deterministic algorithm==
# Among all the <math>c_\theta</math> output in step 4, output the one closest to <math>\mathbf{y}</math>.
 
Every loop of steps 1–4 can be run in [[polynomial time]]; thus, the algorithm above can also be computed in polynomial time. Specifically, each call to an errors-and-erasures decoder for fewer than <math>dD/2</math> errors takes <math>O(d)</math> time. Finally, the runtime of the algorithm above is <math>O(NQn^{O(1)} + NT_{out})</math> where <math>T_{out}</math> is the running time of the outer errors-and-erasures decoder.
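
A sketch of the derandomized search is given below, as a rough illustration under the same assumed representations as the earlier sketches and with a candidate threshold set chosen by us. It relies on the observation that the outcome of the comparison <math>\theta < {2\omega_i \over d}</math> only changes when <math>\theta</math> crosses one of the values <math>{2\omega_i \over d}</math>, so it suffices to try a finite set of thresholds and keep the answer closest to the received word.

<syntaxhighlight lang="python">
ERASURE = "?"


def hamming_distance(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)


def deterministic_decoder(decoded_symbols, omegas, y_blocks, d,
                          outer_decode, concat_encode):
    """decoded_symbols[i] = y_i', omegas[i] = omega_i, y_blocks[i] = i-th received inner block.
    outer_decode is an errors-and-erasures decoder for C_out (returns None on failure);
    concat_encode re-encodes an outer codeword into its list of inner blocks."""
    candidates = sorted({2 * w / d for w in omegas} | {0.0, 1.0})
    best = None
    for theta in candidates:
        # Same as the modified randomized decoder, but with a fixed threshold theta.
        y_pp = [ERASURE if theta < 2 * w / d else sym
                for sym, w in zip(decoded_symbols, omegas)]
        c_theta = outer_decode(y_pp)
        if c_theta is None:
            continue
        # Distance between the re-encoded candidate and the received word, over all inner blocks.
        dist = sum(hamming_distance(block, received)
                   for block, received in zip(concat_encode(c_theta), y_blocks))
        if best is None or dist < best[0]:
            best = (dist, c_theta)
    # Output the candidate codeword closest to y, as in the final step above.
    return None if best is None else best[1]
</syntaxhighlight>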
 
==References==
#[http://www.cs.washington.edu/education/courses/cse533/06au University of Washington - Venkatesan Guruswami]
#G. David Forney. Generalized Minimum Distance decoding. ''IEEE Transactions on Information Theory'', 12:125–131, 1966.
 