'''Proof of lemma 1.''' For every <math>1 \le i \le N,</math> define <math>e_i = \Delta(y_i, c_i).</math> This implies that
 
:<math display="block">\sum_{i=1}^N e_i < \frac{Dd}{2} \qquad\qquad (1)</math>
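This is just the lemma's hypothesis <math>\Delta(y, c) < \tfrac{Dd}{2}</math> rewritten block-wise, with <math>y = (y_1, \ldots, y_N)</math> and <math>c = (c_1, \ldots, c_N)</math>:

:<math display="block">\sum_{i=1}^N e_i = \sum_{i=1}^N \Delta(y_i, c_i) = \Delta(y, c) < \frac{Dd}{2}.</math>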
 
Next for every <math>1 \le i \le N</math>, we define two [[indicator variable]]s:
 
: <math display="block">\begin{align}
X_i^? = 1 &\Leftrightarrow y_i'' = ? \\
X_i^e = 1 &\Leftrightarrow C_\text{in}(y_i'') \ne c_i \ \text{and} \ y_i'' \neq ?
\end{align}</math>
 
We claim that we are done if we can show that for every <math>1 \le i \le N</math>:
 
: <math display="block">\mathbb{E} \left [2X_i^e + X_i^? \right ] \leqslant {2e_i \over d}\qquad\qquad (2)</math>
 
Clearly, by definition
 
:<math display="block">e' = \sum_i X_i^e \quad \text{and} \quad s' = \sum_i X_i^?.</math>
 
Further, by [[linearity of expectation]] together with (2) and (1), we get
 
:<math display="block">\mathbb{E}[2e' + s'] \leqslant \frac{2}{d}\sum_ie_i < D.</math>
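Written out in full, this chain uses linearity of expectation, then (2) term by term, and finally (1):

:<math display="block">\mathbb{E}[2e' + s'] = \sum_{i=1}^N \mathbb{E} \left [2X_i^e + X_i^? \right ] \leqslant \frac{2}{d}\sum_{i=1}^N e_i < \frac{2}{d} \cdot \frac{Dd}{2} = D.</math>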
 
To prove (2), we consider two cases, depending on whether the <math>i</math>-th block is decoded correctly ('''Case 1''') or incorrectly ('''Case 2''').
 
'''Case 1:''' <math>(c_i = C_\text{in}(y_i'))</math>

In this case the algorithm sets <math>y_i'' = ?</math> with probability <math>\tfrac{2\omega_i}{d}</math> and <math>y_i'' = y_i'</math> otherwise. Since <math>C_\text{in}(y_i') = c_i</math>, the event <math>X_i^e = 1</math> cannot occur, so <math>\mathbb{E}[X_i^e] = 0</math> and <math>\mathbb{E}[X_i^?] = \Pr[y_i'' = ?] = \tfrac{2\omega_i}{d}</math>.
Further, by definition we have
 
: <math display="block">\omega_i = \min \left (\Delta(C_\text{in}(y_i'), y_i), \tfrac{d}{2} \right ) \leqslant \Delta(C_\text{in}(y_i'), y_i) = \Delta(c_i, y_i) = e_i</math>
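Combining this bound on <math>\omega_i</math> with the two expectations computed above (which rest on the algorithm declaring an erasure with probability <math>\tfrac{2\omega_i}{d}</math>) establishes (2) in this case:

: <math display="block">\mathbb{E} \left [2X_i^e + X_i^? \right ] = 0 + \frac{2\omega_i}{d} \leqslant \frac{2e_i}{d}.</math>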
 
'''Case 2:''' <math>(c_i \ne C_\text{in}(y_i'))</math>
 
In this case the algorithm again sets <math>y_i'' = ?</math> with probability <math>\tfrac{2\omega_i}{d}</math> and <math>y_i'' = y_i'</math> otherwise; now <math>C_\text{in}(y_i') \ne c_i</math>, so whenever <math>y_i'' \ne ?</math> we have <math>X_i^e = 1</math>. Hence <math>\mathbb{E}[X_i^?] = \tfrac{2\omega_i}{d}</math> and <math>\mathbb{E}[X_i^e] = \Pr[y_i'' \ne ?] = 1 - \tfrac{2\omega_i}{d}</math>. Moreover, <math>e_i + \omega_i \geqslant d</math>: if <math>\omega_i = \Delta(C_\text{in}(y_i'), y_i)</math>, this follows from the [[triangle inequality]] applied to the distinct codewords <math>c_i</math> and <math>C_\text{in}(y_i')</math> of <math>C_\text{in}</math>; if instead <math>\omega_i = \tfrac{d}{2}</math>, then <math>\Delta(C_\text{in}(y_i'), y_i) \geqslant \tfrac{d}{2}</math>, and since the inner decoder chooses <math>y_i'</math> to minimize <math>\Delta(C_\text{in}(y_i'), y_i)</math>, we also have <math>e_i = \Delta(c_i, y_i) \geqslant \Delta(C_\text{in}(y_i'), y_i) \geqslant \tfrac{d}{2}</math>.
Finally, this implies
 
: <math display="block">\mathbb{E}[2X_i^e + X_i^?] = 2 - {2\omega_i \over d} \le {2e_i \over d}.</math>
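For completeness, the arithmetic behind this last step, using only the two expectations above and the bound <math>e_i + \omega_i \geqslant d</math>, is:

: <math display="block">\mathbb{E} \left [2X_i^e + X_i^? \right ] = 2 \left (1 - \frac{2\omega_i}{d} \right ) + \frac{2\omega_i}{d} = 2 - \frac{2\omega_i}{d} = \frac{2(d - \omega_i)}{d} \leqslant \frac{2e_i}{d}.</math>

This proves (2) in both cases, and hence completes the proof of the lemma.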
 
In the following sections, we show that the deterministic version of the above algorithm can perform unique decoding of <math>C_\text{out} \circ C_\text{in}</math> up to half its design distance.