In [[coding theory]], '''generalized minimum distance (GMD) decoding''' provides an efficient [[algorithm]] for decoding [[concatenated code]]s, which is based on using an [[error]]s-and-[[Erasure code|erasures]] [[decoder]] for the [[outer code]].

A [[Concatenated error correction code#Decoding concatenated codes|naive decoding algorithm]] for concatenated codes cannot be optimal, because it does not take into account the information that [[maximum likelihood decoding]] (MLD) gives. In other words, in the naive algorithm, inner received [[Code word (communication)|codeword]]s are treated the same regardless of the difference between their [[Hamming distance]]s. Intuitively, the outer decoder should place higher confidence in symbols whose inner [[code|encodings]] are close to the received word. [[David Forney]] in 1966 devised a better algorithm called generalized minimum distance (GMD) decoding, which makes better use of this information. The method measures the confidence of each received codeword and erases symbols whose confidence is below a desired threshold. GMD decoding was one of the first examples of [[soft-decision decoder]]s. We will present three versions of the GMD decoding algorithm. The first two will be [[randomized algorithm]]s, while the last one will be a [[deterministic algorithm]].
==Setup==
# '''[[Hamming distance]]''': given two vectors <math>u, v \in \Sigma^n</math>, <math>\Delta(u, v)</math> denotes the number of positions in which <math>u</math> and <math>v</math> differ; the minimum distance of a code is the smallest Hamming distance between distinct codewords.
# '''Outer and inner codes''': the outer code <math>C_\text{out} : [Q]^K \to [Q]^N</math> has minimum distance <math>D</math>, the inner code <math>C_\text{in} : [q]^k \to [q]^n</math> has minimum distance <math>d</math>, and their concatenation is denoted <math>C_\text{out} \circ C_\text{in}</math>.
# '''[[Expected value]]''': for a discrete [[random variable]] <math>X</math>,
::<math>\mathbb{E}[X] = \sum_x x \Pr[X = x].</math>
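To make the notation concrete, the following short Python sketch implements the Hamming distance, the minimum distance of a (small) code, and the expected value of a discrete random variable. It is purely illustrative; the function names are ad hoc and not part of the standard presentation.

<syntaxhighlight lang="python">
from itertools import combinations

def hamming_distance(u, v):
    """Delta(u, v): number of positions in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    """Smallest Hamming distance between two distinct codewords of a code."""
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(code, 2))

def expected_value(pmf):
    """E[X] = sum_x x * Pr[X = x] for a probability mass function {x: Pr[X = x]}."""
    return sum(x * p for x, p in pmf.items())

# Example: the binary repetition code of length 3 has minimum distance d = 3.
repetition_code = [(0, 0, 0), (1, 1, 1)]
assert minimum_distance(repetition_code) == 3
assert expected_value({0: 0.5, 2: 0.5}) == 1.0
</syntaxhighlight>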
==Randomized algorithm==
Consider the received word <math>\mathbf{y} = (y_1,\ldots,y_N) \in [q^n]^N</math>, which was corrupted by a [[noisy channel]]. The following is the algorithm description for the general case. In this algorithm, we can decode <math>\mathbf{y}</math> by declaring an erasure at every bad position and running the errors-and-erasures decoding algorithm for <math>C_\text{out}</math> on the resulting vector.
'''Randomized_Decoder'''
<br />'''Given : '''<math>\mathbf{y} = (y_1,\dots,y_N) \in [q^n]^N</math>.
# For every <math>1 \le i \le N</math>, compute <math>y_i' = MLD_{C_\text{in}}(y_i)</math>.
# Set <math>\omega_i = \min(\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2})</math>.
# For every <math>1 \le i \le N</math>, repeat: with probability <math>2\omega_i \over d</math>, set <math>y_i'' \leftarrow ?</math> (an erasure), otherwise set <math>y_i'' = y_i'</math>.
# Run the errors-and-erasures algorithm for <math>C_\text{out}</math> on <math>\mathbf{y}'' = (y_1'', \ldots, y_N'')</math>.
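The following Python sketch illustrates one run of '''Randomized_Decoder'''. It is a minimal, illustrative implementation: the inner code is decoded by brute-force MLD over a list of inner codewords, <math>y_i'</math> is represented directly by the closest inner codeword, erasures are represented by <code>None</code>, and <code>outer_errors_erasures_decode</code> is an assumed (hypothetical) errors-and-erasures decoder for <math>C_\text{out}</math> that is not specified here.

<syntaxhighlight lang="python">
import random

ERASURE = None  # stands for the erasure symbol "?"

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def mld_inner(block, inner_codewords):
    """Brute-force maximum likelihood decoding of one received inner block."""
    return min(inner_codewords, key=lambda c: hamming_distance(c, block))

def randomized_gmd_decode(y, inner_codewords, d, outer_errors_erasures_decode):
    """One run of Randomized_Decoder on y = (y_1, ..., y_N).

    d is the minimum distance of the inner code C_in;
    outer_errors_erasures_decode is an assumed errors-and-erasures decoder for C_out.
    """
    y_double_prime = []
    for y_i in y:
        y_i_prime = mld_inner(y_i, inner_codewords)              # Step 1: inner MLD
        w_i = min(hamming_distance(y_i_prime, y_i), d / 2)       # Step 2: confidence omega_i
        if random.random() < 2 * w_i / d:                        # Step 3: erase with prob. 2*omega_i/d
            y_double_prime.append(ERASURE)
        else:
            y_double_prime.append(y_i_prime)
    return outer_errors_erasures_decode(y_double_prime)          # Step 4: outer decoding
</syntaxhighlight>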
'''Theorem 1.''' ''Let y be a received word such that there exists a [[Code word (communication)|codeword]]'' <math>\mathbf{c} = (c_1, \ldots, c_N) \in C_\text{out} \circ C_\text{in} \subseteq [q^n]^N</math> ''such that'' <math>\Delta(\mathbf{c}, \mathbf{y}) < \tfrac{Dd}{2}</math>. ''Then the deterministic GMD algorithm outputs'' <math>\mathbf{c}</math>.

Note that a [[Concatenated error correction code#Decoding concatenated codes|naive decoding algorithm]] for concatenated codes can correct up to <math>\tfrac{Dd}{4}</math> errors.

:'''Lemma 1.''' ''Let the assumption in Theorem 1 hold. And if <math>\mathbf{y}''</math> has <math>e'</math> errors and <math>s'</math> erasures (when compared with <math>\mathbf{c}</math>) after '''Step 3''', then'' <math>\mathbb{E}[2e' + s'] < D.</math>
''Remark.'' If <math>2e' + s' < D</math>, then the algorithm in '''Step 4''' will output <math>\mathbf{c}</math>. The lemma above says that in expectation, this is indeed the case. Note that this is not enough to prove '''Theorem 1''', but can be crucial in developing future variations of the algorithm.
'''Proof of lemma 1.''' For every <math>1 \le i \le N</math>, define <math>e_i = \Delta(y_i, c_i).</math> This implies that
<math display="block">\sum_{i=1}^N e_i < \frac{Dd}{2}. \qquad (1)</math>
Next for every <math>1 \le i \le N</math>, we define two [[Indicator variable|indicator variables]]:
<math display="block">\begin{align}
X{_i^?} = 1 &\Leftrightarrow y_i'' = ? \\
X{_i^e} = 1 &\Leftrightarrow C_\text{in}(y_i'') \ne c_i \ \text{and} \ y_i'' \neq ?
\end{align}</math>
We claim that we are done if we can show that for every <math>1 \le i \le N</math>:
<math display="block">\mathbb{E}\left[2X_i^e + X_i^?\right] \le \frac{2e_i}{d}. \qquad (2)</math>
Clearly, by definition
<math display="block">e' = \sum_i X_i^e \quad \text{and} \quad s' = \sum_i X_i^?.</math>
Further, by the [[linearity of expectation]], we get
<math display="block">\mathbb{E}[2e' + s'] \le \frac{2}{d}\sum_i e_i < D,</math>
where the last inequality follows from (1).
To prove (2) we consider two cases: the <math>i</math>-th block is correctly decoded ('''Case 1'''), or the <math>i</math>-th block is incorrectly decoded ('''Case 2''').
'''Case 1:''' <math>(c_i = C_\text{in}(y_i'))</math>
Note that if <math>y_i'' = ?</math> then <math>X_i^e = 0</math>, and <math>\Pr[y_i'' = ?] = \tfrac{2\omega_i}{d}</math> implies
<math>\mathbb{E}[X_i^?] = \Pr[X_i^? = 1] = {2\omega_i \over d}</math>, and <math>\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 0</math>.
Further, by definition we have
<math display="block">\omega_i = \min\left(\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2}\right) \le \Delta(C_\text{in}(y_i'), y_i) = \Delta(c_i, y_i) = e_i,</math>
so <math>\mathbb{E}\left[2X_i^e + X_i^?\right] = \tfrac{2\omega_i}{d} \le \tfrac{2e_i}{d}</math>, which proves (2) in this case.
'''Case 2:''' <math>(c_i \ne C_\text{in}(y_i'))</math>
In this case, <math>\mathbb{E}[X_i^?] = \tfrac{2\omega_i}{d}</math> and <math>\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 1 - \tfrac{2\omega_i}{d}</math>.

Since <math>c_i \ne C_\text{in}(y_i')</math>, we have <math>e_i + \omega_i \ge d</math>. This follows from a case analysis on whether <math>\omega_i = \Delta(C_\text{in}(y_i'), y_i) < \tfrac{d}{2}</math> or <math>\omega_i = \tfrac{d}{2}</math>.

Finally, this implies
<math display="block">\mathbb{E}\left[2X_i^e + X_i^?\right] = 2 - \tfrac{2\omega_i}{d} \le \tfrac{2e_i}{d}.</math>
In the following sections, we will finally show that the deterministic version of the algorithm above can do unique decoding of <math>C_\text{out} \circ C_\text{in}</math> up to half its design distance.
==Modified randomized algorithm==
Note that, in the previous version of the GMD algorithm in '''Step 3''', we do not really need to use fresh randomness for each <math>i</math>. Now we come up with another randomized version of the GMD algorithm that uses the ''same'' randomness for every <math>i</math>.
'''Modified_Randomized_Decoder'''
<br />'''Given : '''<math>\mathbf{y} = (y_1, \ldots,y_N) \in [q^n]^N</math>, pick <math>\theta \in [0, 1]</math> at random. Then for every <math>1 \le i \le N</math>:
# Set <math>y_i' = MLD_{C_\text{in}}(y_i)</math>.
# Compute <math>\omega_i = \min(\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2})</math>.
# If <math>\theta < \tfrac{2\omega_i}{d}</math>, set <math>y_i'' \leftarrow ?</math>, otherwise set <math>y_i'' = y_i'</math>.
# Run the errors-and-erasures algorithm for <math>C_\text{out}</math> on <math>\mathbf{y}'' = (y_1'', \ldots, y_N'')</math>.
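Continuing the illustrative sketch above (and reusing its helpers <code>mld_inner</code>, <code>hamming_distance</code>, <code>ERASURE</code> and the assumed outer decoder), the only change in the modified version is that a single threshold <math>\theta</math> is drawn once and compared against every <math>2\omega_i/d</math>:

<syntaxhighlight lang="python">
import random

def modified_randomized_gmd_decode(y, inner_codewords, d, outer_errors_erasures_decode):
    """Modified_Randomized_Decoder: one shared random threshold theta for all blocks."""
    theta = random.random()                                      # theta uniform in [0, 1]
    y_double_prime = []
    for y_i in y:
        y_i_prime = mld_inner(y_i, inner_codewords)              # Step 1
        w_i = min(hamming_distance(y_i_prime, y_i), d / 2)       # Step 2
        # Step 3: erase exactly when theta < 2*omega_i/d
        y_double_prime.append(ERASURE if theta < 2 * w_i / d else y_i_prime)
    return outer_errors_erasures_decode(y_double_prime)          # Step 4
</syntaxhighlight>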
For the proof of '''[[Lemma (mathematics)|Lemma 1]]''', we only use the randomness to show that
<math display="block">\Pr[y_i'' = ?] = \frac{2\omega_i}{d}.</math>
In this version of the GMD algorithm, we note that
<math display="block">\Pr[y_i'' = ?] = \Pr\left[\theta \in \left[0, \tfrac{2\omega_i}{d}\right]\right] = \frac{2\omega_i}{d}.</math>
The second [[Equality (mathematics)|equality]] above follows from the choice of <math>\theta</math>. The proof of '''Lemma 1''' can also be used to show <math>\mathbb{E}[2e' + s'] < D</math> for this version of the GMD algorithm. In the next section, we will see how to get a deterministic version of the GMD algorithm by choosing <math>\theta</math> from a polynomially sized set, as opposed to the infinite set <math>[0, 1]</math>.
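The marginal erasure probability can also be checked empirically with a short, purely illustrative simulation (the function name is ad hoc): for <math>\theta</math> uniform in <math>[0,1]</math>, the event <math>\theta < \tfrac{2\omega_i}{d}</math> indeed occurs with probability <math>\tfrac{2\omega_i}{d}</math>.

<syntaxhighlight lang="python">
import random

def erasure_probability(w_i, d, trials=100_000):
    """Empirical estimate of Pr[theta < 2*w_i/d] for theta uniform in [0, 1]."""
    hits = sum(random.random() < 2 * w_i / d for _ in range(trials))
    return hits / trials

# For d = 10 and omega_i = 2, the erasure probability is 2*2/10 = 0.4.
print(erasure_probability(w_i=2, d=10))   # approximately 0.4
</syntaxhighlight>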
==Deterministic algorithm==
Let <math>Q = \{0,1\} \cup \{{2\omega_1 \over d}, \ldots, {2\omega_N \over d}\}</math>. Since for each <math>i</math>, <math>\omega_i = \min(\Delta(\mathbf{y_i'}, \mathbf{y_i}), {d \over 2})</math>, we have
<math display="block">Q = \{0, 1\} \cup \{q_1, \ldots, q_m\}</math>
where <math>q_1 < q_2 < \cdots < q_m</math> for some <math>m \le \left \lfloor \frac{d}{2} \right \rfloor</math>. Note that for every <math>\theta \in [q_i, q_{i+1}]</math>, '''Step 3''' of the second version of the randomized algorithm outputs the same <math>\mathbf{y''}</math>. Thus, we only need to consider all possible values of <math>\theta \in Q</math>. This gives the deterministic algorithm below.
'''Deterministic_Decoder'''
<br />''' Given : '''<math>\mathbf{y} = (y_1,\ldots,y_N) \in [q^n]^N</math>, for every <math>\theta \in Q</math>, repeat the following.
# Compute <math>y_i' = MLD_{C_\text{in}}(y_i)</math> for every <math>1 \le i \le N</math>.
# Set <math>\omega_i = \min(\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2})</math> for every <math>1 \le i \le N</math>.
# If <math>\theta < \tfrac{2\omega_i}{d}</math>, set <math>y_i'' \leftarrow ?</math>, otherwise set <math>y_i'' = y_i'</math>.
# Run the errors-and-erasures algorithm for <math>C_\text{out}</math> on <math>\mathbf{y}_\theta'' = (y_1'', \ldots, y_N'')</math>. Let <math>c_\theta</math> be the codeword in <math>C_\text{out} \circ C_\text{in}</math> corresponding to the output of the algorithm, if any.
# Among all the <math>c_\theta</math> output in '''Step 4''', output the one closest to <math>\mathbf{y}</math>.
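A sketch of the derandomized procedure, again reusing the helpers from the earlier sketches, is given below. It additionally assumes that the outer errors-and-erasures decoder returns the corresponding codeword of <math>C_\text{out} \circ C_\text{in}</math> as a list of inner blocks (or <code>None</code> on failure), so that candidate codewords can be compared to <math>\mathbf{y}</math> by Hamming distance.

<syntaxhighlight lang="python">
def deterministic_gmd_decode(y, inner_codewords, d, outer_errors_erasures_decode):
    """Deterministic_Decoder: try every threshold theta in the finite set Q."""
    # Inner MLD results and confidence values do not depend on theta.
    y_prime = [mld_inner(y_i, inner_codewords) for y_i in y]
    w = [min(hamming_distance(yp, y_i), d / 2) for yp, y_i in zip(y_prime, y)]

    thresholds = {0.0, 1.0} | {2 * w_i / d for w_i in w}         # the set Q
    best, best_dist = None, None
    for theta in sorted(thresholds):
        # Steps 1-3 for this theta: erase block i exactly when theta < 2*omega_i/d.
        y_double_prime = [ERASURE if theta < 2 * w_i / d else yp
                          for yp, w_i in zip(y_prime, w)]
        c_theta = outer_errors_erasures_decode(y_double_prime)   # Step 4
        if c_theta is None:
            continue
        # Step 5: keep the candidate codeword closest to the received word y.
        dist = sum(hamming_distance(cb, yb) for cb, yb in zip(c_theta, y))
        if best is None or dist < best_dist:
            best, best_dist = c_theta, dist
    return best
</syntaxhighlight>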
Every iteration of '''Steps 1–4''' can be run in [[polynomial time]], so the algorithm above can also be computed in polynomial time. Specifically, each call to an errors-and-erasures decoder of <math>< dD/2</math> errors takes <math>O(d)</math> time. Finally, the runtime of the algorithm above is <math>O(NQn^{O(1)} + NT_\text{out})</math>, where <math>T_\text{out}</math> is the running time of the outer errors-and-erasures decoder.
==See also==
* [[Concatenated code]]s
==References==
{{Reflist}}
* [http://www.cs.washington.edu/education/courses/cse533/06au University of Washington – Venkatesan Guruswami]
{{DEFAULTSORT:Generalized minimum distance decoding}}