In [[coding theory]], generalized minimum distance (GMD) decoding provides an efficient [[algorithm]] for decoding [[concatenated code]]s, which is based on using an [[error|errors]]-and-[[Erasure code|erasures]] [[decoder]] for the [[outer code]].
A naive decoding algorithm for concatenated codes cannot be optimal, because it does not use the information that [[maximum likelihood decoding]] (MLD) provides: every decoded inner block is treated the same, regardless of how far the received block is from the nearest inner codeword. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. [[Dave Forney]] devised generalized minimum distance decoding in 1966 to exploit this information: the decoder measures a confidence for each decoded inner block and erases symbols whose confidence falls below a threshold. GMD decoding was one of the first examples of [[Soft-decision decoder|soft-decision decoders]]. Three versions of the GMD decoding algorithm are presented below; the first two are [[randomized algorithm]]s, and the third is [[deterministic algorithm|deterministic]].
==Setup==
# [[Hamming distance]] : Given two [[vector]]s <math>u, v \in \Sigma^n</math>, the Hamming distance between <math>u</math> and <math>v</math>, denoted by <math>\Delta(u, v)</math>, is defined to be the number of positions in which <math>u</math> and <math>v</math> differ.
# Minimum distance : Let <math>C \subseteq \Sigma^n</math> be a [[code]]. The minimum distance of code <math>C</math> is defined to be <math>d = \min{\Delta(c_1, c_2)}</math> where <math>c_1 \ne c_2 \in C</math>.
# Code concatenation : Given <math>m = (m_1, \ldots, m_K) \in [Q]^K</math>, consider two codes, an outer code <math>C_\text{out} : [Q]^K \to [Q]^N</math> and an inner code <math>C_\text{in} : [q]^k \to [q]^n</math> with <math>[Q] = [q]^k</math>. Their concatenation is defined by <math>(C_\text{out} \circ C_\text{in})(m) = (C_\text{in}(C_\text{out}(m)_1), \ldots, C_\text{in}(C_\text{out}(m)_N))</math>. We will take <math>C_\text{out}</math> to be a [[Reed–Solomon error correction|Reed–Solomon code]], which has an errors-and-erasures decoder, and <math>k = O(\log N)</math>, so that MLD on the inner code runs in time polynomial in <math>N</math>.
# Maximum likelihood decoding (MLD) : MLD is a decoding method for error correcting codes, which outputs the codeword closest to the received word in Hamming distance. The MLD function denoted by <math>D_{MLD} : \Sigma^n \rightarrow C</math> is defined as follows. For every <math>y \in \Sigma^n</math>, <math>D_{MLD}(y) = \arg \min_{c \in C}\Delta(c, y)</math>.
# [[Probability density function]] : A [[probability distribution]] <math>\Pr[\bullet]</math> on a sample space <math>S</math> is a mapping from events of <math>S</math> to [[real number]]s such that <math>\Pr[A] \ge 0</math> for any event <math>A</math>, <math>\Pr[S] = 1</math>, and <math>\Pr[A \cup B] = \Pr[A] + \Pr[B]</math> for any two mutually exclusive events <math>A</math> and <math>B</math>. A short code sketch illustrating some of these definitions follows this list.
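The definitions of Hamming distance, minimum distance, and MLD can be made concrete with a small sketch. The Python code below is illustrative only and is not part of the standard presentation; the helper names <code>hamming_distance</code>, <code>minimum_distance</code>, and <code>mld</code> are ad hoc.
<syntaxhighlight lang="python">
def hamming_distance(u, v):
    """Number of positions in which u and v differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

def minimum_distance(code):
    """Smallest Hamming distance between two distinct codewords."""
    return min(hamming_distance(c1, c2)
               for i, c1 in enumerate(code)
               for c2 in code[i + 1:])

def mld(code, y):
    """Maximum likelihood decoding: the codeword of `code` closest to y."""
    return min(code, key=lambda c: hamming_distance(c, y))

# Example with the binary repetition code of length 3 (minimum distance 3).
code = [(0, 0, 0), (1, 1, 1)]
print(minimum_distance(code))   # 3
print(mld(code, (1, 0, 1)))     # (1, 1, 1)
</syntaxhighlight>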
==Randomized algorithm==
Consider the received word <math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>, which has been corrupted by a [[noisy channel]]. The following randomized procedure decodes <math>\mathbf{y}</math> by declaring an erasure at every low-confidence position and then running an errors-and-erasures decoder for <math>C_\text{out}</math> on the resulting vector; an illustrative code sketch is given after the description.
'''Randomized_Decoder'''
<br />'''Given : '''<math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>.
# For every <math>1 \le i \le N</math>, compute <math>y_i^\prime = MLD_{C_\text{in}}(y_i)</math>.
# Set <math>\omega_i = \min(\Delta(C_\text{in}(y_i^\prime), y_i), {d\over2})</math>.
# For every <math>1 \le i \le N</math>: with probability <math>2\omega_i \over d</math>, set <math>y_i^{\prime\prime} \leftarrow {?}</math> (an erasure); otherwise set <math>y_i^{\prime\prime} = y_i'</math>.
# Run errors and erasure algorithm for <math>C_\text{out}</math> on <math>\mathbf{y}^{\prime\prime} = (y_1^{\prime\prime}, \ldots, y_N^{\prime\prime})</math>.
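The sketch below shows one possible way to organize the four steps above in Python; it is illustrative only. The callables <code>inner_encode</code>, <code>inner_decode_mld</code>, and <code>outer_errors_erasures_decode</code> are assumed to be supplied by the caller: they stand in for <math>C_\text{in}</math>, <math>MLD_{C_\text{in}}</math>, and the outer errors-and-erasures decoder, none of which are specified here.
<syntaxhighlight lang="python">
import random

ERASURE = "?"

def hamming_distance(u, v):
    """Number of positions in which u and v differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

def randomized_decoder(y_blocks, d, inner_encode, inner_decode_mld,
                       outer_errors_erasures_decode):
    """One pass of the randomized GMD decoder over the N received blocks."""
    y_pp = []
    for y_i in y_blocks:
        # Step 1: maximum likelihood decoding of the inner code.
        y_prime = inner_decode_mld(y_i)
        # Step 2: confidence weight, capped at d/2.
        w_i = min(hamming_distance(inner_encode(y_prime), y_i), d / 2)
        # Step 3: erase this outer symbol with probability 2*w_i/d.
        y_pp.append(ERASURE if random.random() < 2 * w_i / d else y_prime)
    # Step 4: errors-and-erasures decoding of the outer code on y''.
    return outer_errors_erasures_decode(y_pp)
</syntaxhighlight>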
'''Theorem 1.''' ''Let <math>\mathbf{y}</math> be a received word such that there exists a [[codeword]] <math>\mathbf{c} = (c_1, \ldots, c_N) \in C_\text{out} \circ C_\text{in} \subseteq [q^n]^N</math> with <math>\Delta(\mathbf{c}, \mathbf{y}) < {Dd \over 2}</math>. Then the deterministic GMD algorithm outputs <math>\mathbf{c}</math>.''
Note that a [[Concatenated code|naive decoding algorithm for concatenated codes]] can correct up to <math>{Dd \over 4}</math> errors.
'''Lemma 1.''' ''Let the hypothesis of Theorem 1 hold, and let <math>\mathbf{y}^{\prime\prime}</math> have <math>e'</math> errors and <math>s'</math> erasures (when compared with <math>\mathbf{c}</math>) after '''Steps 1–3'''. Then <math>\mathbb{E}[2e' + s'] < D.</math>''

If <math>2e' + s' < D</math>, then the errors-and-erasures algorithm for <math>C_\text{out}</math> in '''Step 4''' will output <math>\mathbf{c}</math>. The lemma above says that this is indeed the case in expectation. Note that this is not enough to prove '''Theorem 1''', but it is crucial in developing the deterministic variant of the algorithm below.
'''Proof of Lemma 1.''' For every <math>1 \le i \le N</math>, define <math>e_i = \Delta(y_i, c_i).</math> This implies that
: <math>\sum_{i=1}^N e_i < {Dd \over 2}. \qquad\qquad (1)</math>
Next, for every <math>1 \le i \le N</math>, we define two [[indicator function|indicator variables]]:
: <math>X_i^? = 1</math> if and only if <math>y_i^{\prime\prime} = {?},</math>
and
: <math>X_i^e = 1</math> if and only if <math>C_\text{in}(y_i^{\prime\prime}) \ne c_i</math> and <math>y_i^{\prime\prime} \ne {?}.</math>
We claim that we are done if we can show that for every <math>1 \le i \le N</math>:
: <math>\mathbb{E}\left[2X_i^e + X_i^?\right] \le {2e_i \over d}. \qquad\qquad (2)</math>
Clearly, by definition <math>e' = \sum_i X_i^e</math> and <math>s' = \sum_i X_i^?</math>. Further, by [[linearity of expectation]] and (1), we get <math>\mathbb{E}[2e' + s'] \le {2 \over d} \sum_i e_i < D</math>. To prove (2) we consider two cases: the <math>i</math>-th block is correctly decoded ('''Case 1''') or incorrectly decoded ('''Case 2''').
'''Case 1:''' <math>(c_i = C_\text{in}(y_i'))</math>
Note that <math>X_i^e = 0</math> always holds in this case: if <math>y_i^{\prime\prime} = {?}</math> this is true by definition, and if <math>y_i^{\prime\prime} = y_i'</math> then <math>C_\text{in}(y_i^{\prime\prime}) = c_i</math>. Together with <math>\Pr[y_i^{\prime\prime} = {?}] = {2\omega_i \over d}</math>, this implies
: <math>\mathbb{E}[X_i^?] = \Pr[X_i^? = 1] = {2\omega_i \over d}</math> and <math>\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 0.</math>
Further, by definition we have
: <math>\omega_i = \min\left(\Delta(C_\text{in}(y_i'), y_i), {d \over 2}\right) \le \Delta(C_\text{in}(y_i'), y_i) = \Delta(c_i, y_i) = e_i,</math>
so <math>\mathbb{E}[2X_i^e + X_i^?] = {2\omega_i \over d} \le {2e_i \over d}</math>, which proves (2) in this case.
'''Case 2:''' <math>(c_i \ne C_\text{in}(y_i'))</math>
In this case,
: <math>\mathbb{E}[X_i^?] = {2\omega_i \over d}</math> and <math>\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 1 - {2\omega_i \over d}.</math>
Since <math>c_i \ne C_\text{in}(y_i')</math>, we have <math>e_i + \omega_i \ge d</math>. This follows from a case analysis on <math>\omega_i</math> (see [http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect28.pdf these lecture notes]): if <math>\omega_i = \Delta(C_\text{in}(y_i'), y_i) < {d \over 2}</math>, then the [[triangle inequality]] gives <math>e_i + \omega_i \ge \Delta(c_i, C_\text{in}(y_i')) \ge d</math>; otherwise <math>\omega_i = {d \over 2}</math>, and since <math>y_i'</math> is the MLD output, <math>e_i = \Delta(c_i, y_i) \ge \Delta(C_\text{in}(y_i'), y_i) \ge {d \over 2}</math>, so again <math>e_i + \omega_i \ge d</math>.
Combining these,
: <math>\mathbb{E}[2X_i^e + X_i^?] = 2\left(1 - {2\omega_i \over d}\right) + {2\omega_i \over d} = 2 - {2\omega_i \over d} \le {2e_i \over d},</math>
where the last inequality uses <math>e_i \ge d - \omega_i</math>. This proves (2) in both cases and completes the proof of '''Lemma 1'''.
In the following sections, we will show that the deterministic version of the algorithm above performs unique decoding of <math>C_\text{out} \circ C_\text{in}</math> up to half its design distance.
==Modified randomized algorithm==
'''Modified_Randomized_Decoder'''
<br />'''Given : '''<math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>, pick <math>\theta \in [0, 1]</math> uniformly at random. Then for every <math>1 \le i \le N</math>:
# Set <math>y_i^\prime = MLD_{C_\text{in}}(y_i)</math>.
# Compute <math>\omega_i = \min(\Delta(C_\text{in}(y_i^\prime), y_i), {d\over2})</math>.
# If <math>\theta < {2\omega_i \over d}</math>, set <math>y_i^{\prime\prime} \leftarrow {?}</math>; otherwise set <math>y_i^{\prime\prime} = y_i'</math>.
# Run the errors-and-erasures algorithm for <math>C_\text{out}</math> on <math>\mathbf{y}^{\prime\prime} = (y_1^{\prime\prime}, \ldots, y_N^{\prime\prime})</math> (see the sketch after this list).
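A corresponding sketch of this variant is given below; it is again illustrative only, reuses <code>hamming_distance</code>, <code>ERASURE</code>, and the assumed helpers from the previous sketch, and differs from it only in that a single threshold <math>\theta</math> is drawn once and shared by all blocks.
<syntaxhighlight lang="python">
import random

def modified_randomized_decoder(y_blocks, d, inner_encode, inner_decode_mld,
                                outer_errors_erasures_decode, theta=None):
    """Like randomized_decoder, but one threshold theta is shared by all blocks."""
    if theta is None:
        theta = random.random()          # theta drawn uniformly from [0, 1]
    y_pp = []
    for y_i in y_blocks:
        y_prime = inner_decode_mld(y_i)
        w_i = min(hamming_distance(inner_encode(y_prime), y_i), d / 2)
        # Erase exactly when theta < 2*w_i/d, so Pr[erasure] = 2*w_i/d.
        y_pp.append(ERASURE if theta < 2 * w_i / d else y_prime)
    return outer_errors_erasures_decode(y_pp)
</syntaxhighlight>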
For the proof of '''Lemma 1''', we only used the randomness to show that
: <math>\Pr[y_i^{\prime\prime} = ?] = {2\omega_i \over d}.</math>
In this version of the GMD algorithm, we note that
: <math>\Pr[y_i^{\prime\prime} = ?] = \Pr[\theta \in [0, {2\omega_i \over d}]] = {2\omega_i \over d}.</math>
The second equality above follows from the choice of <math>\theta</math>. The proof of '''Lemma 1''' can also be used to show <math>\mathbb{E}[2e' + s'] < D</math> for this second version of the GMD algorithm.
==Deterministic algorithm==
Since <math>\mathbb{E}_\theta[2e' + s'] < D</math>, there must exist a value of <math>\theta</math> for which <math>2e' + s' < D</math>. The algorithm below derandomizes the choice of <math>\theta</math> by trying every threshold that can change the outcome.
Let <math>Q = \{0, 1\} \cup \{{2\omega_1 \over d}, \ldots, {2\omega_N \over d}\}</math>. Since for each <math>i</math>, <math>\omega_i = \min(\Delta(C_\text{in}(y_i'), y_i), {d \over 2})</math>, we have
: <math>Q = \{0, 1\} \cup \{q_1, \ldots,q_m\}</math>
where <math>q_1 < q_2 < \cdots < q_m</math> for some <math>m \le \left\lfloor {d \over 2} \right\rfloor</math>. As <math>\theta</math> varies between two consecutive values in <math>Q</math>, the vector <math>\mathbf{y}^{\prime\prime}</math> computed by the modified randomized algorithm does not change, so it suffices to run that algorithm only for the values of <math>\theta</math> in <math>Q</math>. This gives the following deterministic algorithm.
'''Deterministic_Decoder'''
<br />'''Given : '''<math>\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N</math>. For every <math>\theta \in Q</math>, repeat the following steps:
# Compute <math>y_i^\prime = MLD_{C_\text{in}}(y_i)</math> for <math>1 \le i \le N</math>.
# Set <math>\omega_i = \min(\Delta(C_\text{in}(y_i^\prime), y_i), {d\over2})</math> for every <math>1 \le i \le N</math>.
# If <math>\theta < {2\omega_i \over d}</math>, set <math>y_i^{\prime\prime} \leftarrow {?}</math>; otherwise set <math>y_i^{\prime\prime} = y_i'</math>.
# Run errors-and-erasures algorithm for <math>C_\text{out}</math> on <math>\mathbf{y^{\prime\prime}} = (y_1^{\prime\prime}, \ldots, y_N^{\prime\prime})</math>. Let <math>c_\theta</math> be the codeword in <math>C_\text{out} \circ C_\text{in}</math> corresponding to the output of the algorithm, if any.
# Among all the <math>c_\theta</math> output in Step 4, output the one closest to <math>\mathbf{y}</math> (see the sketch after this list).
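The sketch below shows one way such a derandomized search could look; it is illustrative only and reuses <code>hamming_distance</code> and <code>modified_randomized_decoder</code> from the previous sketches. The helper <code>concat_encode</code>, which re-encodes a decoded message under <math>C_\text{out} \circ C_\text{in}</math>, is hypothetical, and the outer decoder is assumed to return <code>None</code> when it fails.
<syntaxhighlight lang="python">
def deterministic_decoder(y_blocks, d, inner_encode, inner_decode_mld,
                          outer_errors_erasures_decode, concat_encode):
    """Try every threshold in Q and keep the candidate closest to y."""
    # Steps 1-2: confidence weights, used to form Q = {0, 1} U {2*w_i/d}.
    weights = [min(hamming_distance(inner_encode(inner_decode_mld(y_i)), y_i),
                   d / 2)
               for y_i in y_blocks]
    thresholds = {0.0, 1.0} | {2 * w / d for w in weights}

    received = [s for block in y_blocks for s in block]   # flatten y
    best, best_dist = None, None
    for theta in sorted(thresholds):
        # Steps 3-4 for this fixed theta (the modified decoder recomputes
        # the weights internally).
        m = modified_randomized_decoder(y_blocks, d, inner_encode,
                                        inner_decode_mld,
                                        outer_errors_erasures_decode,
                                        theta=theta)
        if m is None:
            continue
        # Step 5: keep the candidate codeword closest to the received word.
        candidate = [s for block in concat_encode(m) for s in block]
        dist = hamming_distance(candidate, received)
        if best_dist is None or dist < best_dist:
            best, best_dist = m, dist
    return best
</syntaxhighlight>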
Since every iteration of Steps 1–4 runs in [[polynomial time]], the algorithm above also runs in polynomial time. Specifically, each call to an errors-and-erasures decoder correcting fewer than <math>dD/2</math> errors takes <math>O(d)</math> time. Finally, the runtime of the algorithm above is <math>O(NQn^{O(1)} + NT_\text{out})</math>, where <math>T_\text{out}</math> is the running time of the outer errors-and-erasures decoder.
==See also==