In [[coding theory]], '''generalized minimum-distance (GMD) decoding''' provides an efficient [[algorithm]] for decoding [[concatenated code]]s, which is based on using an [[error]]s-and-[[Erasure code|erasures]] decoder for the [[outer code]].
 
A [[Concatenated error correction code#Decoding concatenated codes|naive decoding algorithm]] for concatenated codes is not optimal, because it does not take into account the information that [[maximum likelihood decoding]] (MLD) gives. In other words, in the naive algorithm, inner received [[Code word (communication)|codeword]]s are all treated alike, regardless of their [[Hamming distance]]s from the decoded inner codewords. Intuitively, the outer decoder should place higher confidence in symbols whose inner [[code|encodings]] are close to the received word. In 1966, [[David Forney]] devised a better algorithm, called generalized minimum distance (GMD) decoding, which makes better use of this information: it measures the confidence of each received codeword and erases symbols whose confidence falls below a desired value. The GMD decoding algorithm was one of the first examples of a [[soft-decision decoder]]. We will present three versions of the GMD decoding algorithm. The first two will be [[randomized algorithm]]s, while the last one will be a [[deterministic algorithm]].
 
==Setup==
 
==Randomized algorithm==
Consider the received word <math>\mathbf{y} = (y_1,\ldots,y_N) \in [q^n]^N</math>, which was corrupted by a [[noisy channel]]. The following is the algorithm description for the general case. In this algorithm, we can decode <math>\mathbf{y}</math> by declaring an erasure at every bad position and then running the errors and erasures decoding algorithm for <math>C_\text{out}</math> on the resulting vector.
 
'''Randomized_Decoder'''
# Run errors and erasure algorithm for <math>C_\text{out}</math> on <math>\mathbf{y}'' = (y_1'', \ldots, y_N'')</math>.
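
The following Python sketch illustrates one pass of this randomized decoder. It assumes the earlier steps compute <math>y_i'</math> by [[maximum likelihood decoding]] of the inner code, set <math>\omega_i = \min \left (\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2} \right )</math>, and erase position <math>i</math> with probability <math>\tfrac{2\omega_i}{d}</math>, as used in the proof of Lemma 1 below; the helper names <code>mld_inner</code>, <code>inner_encode</code>, <code>errors_and_erasures_decode</code> and <code>hamming</code> are hypothetical placeholders for the inner MLD decoder, the inner encoder, the outer errors-and-erasures decoder and the Hamming distance.

<syntaxhighlight lang="python">
import random

ERASURE = None  # stands for the erasure symbol "?"

def randomized_gmd_decode(y_blocks, d, mld_inner, inner_encode,
                          errors_and_erasures_decode, hamming):
    """One pass of the randomized GMD decoder (sketch).

    y_blocks -- received inner blocks y_1, ..., y_N
    d        -- minimum distance of the inner code C_in
    """
    y_double_prime = []
    for y_i in y_blocks:
        y_i_prime = mld_inner(y_i)                      # inner MLD decoding
        # Confidence weight: distance from the re-encoded guess to the
        # received block, capped at d/2.
        w_i = min(hamming(inner_encode(y_i_prime), y_i), d / 2)
        # Declare an erasure at position i with probability 2*w_i/d.
        if random.random() < 2 * w_i / d:
            y_double_prime.append(ERASURE)
        else:
            y_double_prime.append(y_i_prime)
    # Run the errors-and-erasures decoder for C_out on y''.
    return errors_and_erasures_decode(y_double_prime)
</syntaxhighlight>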
 
'''Theorem 1.''' ''Let y be a received word such that there exists a [[Code word (communication)|codeword]]'' <math>\mathbf{c} = (c_1,\cdots, c_N) \in C_\text{out}\circ{C_\text{in}} \subseteq [q^n]^N</math> ''such that'' <math>\Delta(\mathbf{c}, \mathbf{y}) < \tfrac{Dd}{2}</math>. ''Then the deterministic GMD algorithm outputs'' <math>\mathbf{c}</math>.
 
Note that a [[Concatenated codes|naive decoding algorithm for concatenated codes]] can correct up to <math>\tfrac{Dd}{4}</math> errors.
'''Proof of lemma 1.''' For every <math>1 \le i \le N,</math> define <math>e_i = \Delta(y_i, c_i).</math> This implies that
 
:<math display="block">\sum_{i=1}^N e_i < \frac{Dd}{2} \qquad\qquad (1)</math>
 
Next for every <math>1 \le i \le N</math>, we define two [[indicator variable]]s:
 
: <math display="block">\begin{align}
X_i^? = 1 &\Leftrightarrow y_i'' = ? \\
X_i^e = 1 &\Leftrightarrow C_\text{in}(y_i'') \ne c_i \ \text{and} \ y_i'' \neq ?
\end{align}</math>
 
We claim that we are done if we can show that for every <math>1 \le i \le N</math>:
 
: <math display="block">\mathbb{E} \left [2X_i^e + X_i^? \right ] \leqslant {2e_i \over d}\qquad\qquad (2)</math>

Clearly, by definition

:<math display="block">e' = \sum_i X_i^e \quad \text{and} \quad s' = \sum_i X_i^?.</math>

Further, by the [[linear]]ity of expectation, we get

:<math display="block">\mathbb{E}[2e' + s'] \leqslant \frac{2}{d}\sum_ie_i < D.</math>
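
Written out in full, this chain combines (2), summed over all <math>i</math>, with the bound (1):

:<math display="block">\mathbb{E}[2e' + s'] = \sum_{i=1}^N \mathbb{E}\left[2X_i^e + X_i^?\right] \leqslant \frac{2}{d}\sum_{i=1}^N e_i < \frac{2}{d}\cdot\frac{Dd}{2} = D.</math>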
To prove (2), we consider two cases: the <math>i</math>-th block is correctly decoded ('''Case 1'''), or the <math>i</math>-th block is incorrectly decoded ('''Case 2''').
 
'''Case 1:''' <math>(c_i = C_\text{in}(y_i'))</math>

In this case, by definition we have
 
: <math display="block">\omega_i = \min \left (\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2} \right ) \leqslant \Delta(C_\text{in}(y_i'), y_i) = \Delta(c_i, y_i) = e_i</math>
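
Since a position that is not erased keeps <math>y_i'' = y_i'</math>, which encodes to <math>c_i</math>, we have <math>X_i^e = 0</math> in this case. Combining the erasure probability <math>\Pr[y_i'' = ?] = \tfrac{2\omega_i}{d}</math> with the bound <math>\omega_i \leqslant e_i</math> just shown gives (2) for this case:

: <math display="block">\mathbb{E}\left[2X_i^e + X_i^?\right] = \Pr[y_i'' = ?] = \frac{2\omega_i}{d} \leqslant \frac{2e_i}{d}.</math>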
 
'''Case 2:''' <math>(c_i \ne C_\text{in}(y_i'))</math>
 
In this case, <math>\mathbb{E}[X_i^?] = \tfrac{2\omega_i}{d}</math> and <math>\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 1 - \tfrac{2\omega_i}{d}.</math>
 
Since <math>c_i \ne C_\text{in}(y_i')</math>, we have <math>e_i + \omega_i \geqslant d</math>. This follows from another case analysis<ref>{{cite web |url=https://cse.buffalo.edu/faculty/atri/courses/coding-theory/lectures/lect28.pdf |title=Lecture 28: Generalized Minimum Distance Decoding |date=November 5, 2007 |archive-url=https://web.archive.org/web/20110606191851/http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect28.pdf |archive-date=2011-06-06 |url-status=live}}</ref> of whether <math>\omega_i = \Delta(C_\text{in}(y_i'), y_i) < \tfrac{d}{2}</math> or not.
 
Finally, this implies
 
: <math display="block">\mathbb{E}[2X_i^e + X_i^?] = 2 - {2\omega_i \over d} \le {2e_i \over d}.</math>
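
For completeness, this last bound expands the two expectations above and then applies <math>\omega_i \geqslant d - e_i</math>:

: <math display="block">\mathbb{E}\left[2X_i^e + X_i^?\right] = 2\left(1 - \frac{2\omega_i}{d}\right) + \frac{2\omega_i}{d} = 2 - \frac{2\omega_i}{d} \leqslant 2 - \frac{2(d - e_i)}{d} = \frac{2e_i}{d}.</math>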
 
In the following sections, we will finally show that the deterministic version of the algorithm above can do unique decoding of <math>C_\text{out} \circ C_\text{in}</math> up to half its design distance.
 
For the proof of '''[[Lemma (mathematics)|Lemma 1]]''', we only use the randomness to show that
 
: <math display="block">\Pr[y_i'' = ?] = {2\omega_i \over d}.</math>
 
In this version of the GMD algorithm, we note that
 
: <math display="block">\Pr[y_i'' = ?] = \Pr \left [\theta \in \left [0, \tfrac{2\omega_i}{d} \right ] \right ] = \tfrac{2\omega_i}{d}.</math>
 
The second [[Equality (mathematics)|equality]] above follows from the choice of <math>\theta</math>. The proof of '''Lemma 1''' can also be used to show <math>\mathbb{E}[2e' + s'] < D</math> for version 2 of the GMD algorithm. In the next section, we will see how to get a deterministic version of the GMD algorithm by choosing <math>\theta</math> from a polynomially sized set, as opposed to the current infinite set <math>[0, 1]</math>.
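
The earlier sketch adapts directly to this second version, with the same hypothetical helper names; the only change is that a single <math>\theta</math> is drawn uniformly from <math>[0, 1]</math> and shared by every position, position <math>i</math> being erased exactly when <math>\theta \in \left [0, \tfrac{2\omega_i}{d} \right ]</math>.

<syntaxhighlight lang="python">
import random

ERASURE = None  # stands for the erasure symbol "?"

def randomized_gmd_decode_v2(y_blocks, d, mld_inner, inner_encode,
                             errors_and_erasures_decode, hamming):
    """Second randomized GMD version (sketch): one theta shared by all positions."""
    theta = random.random()                   # theta chosen uniformly from [0, 1]
    y_double_prime = []
    for y_i in y_blocks:
        y_i_prime = mld_inner(y_i)
        w_i = min(hamming(inner_encode(y_i_prime), y_i), d / 2)
        # Erase iff theta lies in [0, 2*w_i/d]; this event has probability 2*w_i/d.
        y_double_prime.append(ERASURE if theta <= 2 * w_i / d else y_i_prime)
    return errors_and_erasures_decode(y_double_prime)
</syntaxhighlight>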
 
Let <math>Q = \{0,1\} \cup \{{2\omega_1 \over d}, \ldots, {2\omega_N \over d}\}</math>. Since for each <math>i</math>, <math>\omega_i = \min \left (\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2} \right )</math>, we have
 
: <math display="block">Q = \{0, 1\} \cup \{q_1, \ldots,q_m\}</math>
 
where <math>q_1 < \cdots < q_m</math> for some <math>m \le \left \lfloor \frac{d}{2} \right \rfloor</math>. Note that, for every <math>\theta \in [q_i, q_{i+1}]</math>, step 1 of the second version of the randomized algorithm outputs the same <math>\mathbf{y}''</math>. Thus, it suffices to consider every value of <math>\theta \in Q</math>. This gives the deterministic algorithm below.
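
The loop over <math>Q</math> can be sketched in Python as follows, reusing the hypothetical helpers from the randomized sketches; <code>concat_encode</code> is an additional placeholder for the encoder of <math>C_\text{out} \circ C_\text{in}</math>, and the sketch returns the candidate codeword whose concatenated encoding is closest to the received word.

<syntaxhighlight lang="python">
def deterministic_gmd_decode(y_blocks, d, mld_inner, inner_encode,
                             errors_and_erasures_decode, concat_encode, hamming):
    """Deterministic GMD (sketch): try every threshold theta in Q."""
    ERASURE = None
    # Inner decoding and confidence weights, as in the randomized versions.
    primes, weights = [], []
    for y_i in y_blocks:
        y_i_prime = mld_inner(y_i)
        primes.append(y_i_prime)
        weights.append(min(hamming(inner_encode(y_i_prime), y_i), d / 2))

    # Q = {0, 1} together with the values 2*w_i/d: any theta strictly between
    # two consecutive values of Q yields the same erasure pattern as one of them.
    Q = sorted({0.0, 1.0} | {2 * w / d for w in weights})

    best, best_dist = None, None
    for theta in Q:
        y_double_prime = [ERASURE if theta <= 2 * w / d else s
                          for s, w in zip(primes, weights)]
        candidate = errors_and_erasures_decode(y_double_prime)
        if candidate is None:
            continue                          # decoding may fail for this theta
        # Keep the candidate whose concatenated encoding is closest to y.
        dist = sum(hamming(block, y_i)
                   for block, y_i in zip(concat_encode(candidate), y_blocks))
        if best_dist is None or dist < best_dist:
            best, best_dist = candidate, dist
    return best
</syntaxhighlight>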
 
 
==See also==
* [[Concatenated code]]s
* [[Reed Solomon|Reed Solomon error correction]]
* [[Berlekamp–Welch algorithm|Welch Berlekamp algorithm]]
 
==References==
{{Reflist}}
* [https://cse.buffalo.edu/faculty/atri/courses/coding-theory/lectures University at Buffalo Lecture Notes on Coding Theory – Atri Rudra]
* [http://people.csail.mit.edu/madhu/FT01 MIT Lecture Notes on Essential Coding Theory – Madhu Sudan]
* [http://www.cs.washington.edu/education/courses/cse533/06au University of Washington – Venkatesan Guruswami]
* G. David Forney. Generalized Minimum Distance decoding. ''IEEE Transactions on Information Theory'', 12:125–131, 1966
 
{{DEFAULTSORT:Generalized minimum distance decoding}}