'''Proof of lemma 1.''' For every <math>1 \le i \le N,</math> define <math>e_i = \Delta(y_i, c_i).</math> This implies that
:<math>\sum_{i=1}^N e_i < \frac{dD}{2}. \qquad\qquad (1)</math>
Next for every <math>1 \le i \le N</math>, we define two [[indicator variable]]s:
:<math>\begin{align}
X_i^? = 1 &\Leftrightarrow y_i'' = {?} \\
X_i^e = 1 &\Leftrightarrow C_\text{in}(y_i'') \ne c_i \ \text{and} \ y_i'' \neq {?}
\end{align}</math>
We claim that we are done if we can show that for every <math>1 \le i \le N</math>:
:<math>\mathbb{E}\left[2X_i^e + X_i^?\right] \le \frac{2e_i}{d}. \qquad\qquad (2)</math>
Clearly, by definition
:<math>e' = \sum_{i=1}^N X_i^e \quad \text{and} \quad s' = \sum_{i=1}^N X_i^?.</math>
Further, by the [[Expected value|linearity of expectation]], we get
:<math>\mathbb{E}[2e' + s'] \le \frac{2}{d}\sum_{i=1}^N e_i < D.</math>
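The linearity-of-expectation step above is a general fact about indicator variables and can be checked numerically. A minimal Python sketch, where the probabilities are arbitrary illustrative values rather than anything derived from <math>C_\text{in}</math>:

```python
import random

# Illustration of linearity of expectation with indicator variables:
# E[sum_i X_i] = sum_i E[X_i] = sum_i Pr[X_i = 1],
# and this holds regardless of any dependence between the X_i.
random.seed(0)

probs = [0.1, 0.3, 0.5, 0.2, 0.4]  # arbitrary Pr[X_i = 1], for illustration only
trials = 200_000

total = 0
for _ in range(trials):
    # Each indicator is drawn independently here; independence is not
    # required for linearity, but it keeps the simulation simple.
    total += sum(1 for p in probs if random.random() < p)

empirical = total / trials  # Monte Carlo estimate of E[sum_i X_i]
expected = sum(probs)       # sum_i E[X_i], by linearity of expectation
print(empirical, expected)
```

The empirical average converges to the sum of the individual probabilities, which is the identity used to pass from the per-block bound (2) to a bound on <math>\mathbb{E}[2e' + s'].</math>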
To prove (2), we consider two cases: the <math>i</math>-th block is correctly decoded ('''Case 1'''), or the <math>i</math>-th block is incorrectly decoded ('''Case 2'''):
Further, by definition we have
'''Case 2:''' <math>(c_i \ne C_\text{in}(y_i'))</math>
Finally, this implies inequality (2) in this case as well, which completes the proof of the lemma.
In the following sections, we will show that a deterministic version of the algorithm above can perform unique decoding of <math>C_\text{out} \circ C_\text{in}</math> up to half its design distance.