Binary Goppa code: Difference between revisions

Decoding: small cosmetic fixes
The Patterson algorithm converts a syndrome into a vector of errors. The syndrome of a word <math>c=(c_0,\dots,c_{n-1})</math> is expected to take the form
 
: <math>s(x) \equiv \sum_{c_i=1} \frac{1}{x - L_i} \mod g(x)</math>
 
An alternative form of the parity-check matrix, based on the formula for <math>s(x)</math>, can be used to produce such a syndrome with a simple matrix multiplication.
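The syndrome computation can be sketched on a toy instance. The field GF(2^4), the Goppa polynomial <math>g(x) = x^2 + x + \alpha^3</math>, the support <math>L</math>, and all helper names below are illustrative assumptions, not part of the article:

```python
MOD = 0b10011                       # GF(2^4) as GF(2)[z]/(z^4 + z + 1)

def gmul(a, b):
    """Multiply two field elements (carry-less multiply with reduction)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD
        b >>= 1
    return r

def ginv(a):
    """Invert a nonzero field element as a^(16-2)."""
    r, e = 1, 14
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

# Toy code: t = 2, g(x) = x^2 + x + a^3 (irreducible over GF(16)),
# support L = all 16 field elements, so n = 16.
g = [8, 1, 1]                       # coefficients, lowest degree first
L = list(range(16))

def inv_linear(Li):
    """(x - Li)^-1 mod g(x).  From g(x) = q(x)(x - Li) + g(Li) it follows
    that (x - Li)^-1 = q(x) * g(Li)^-1; char 2, so '-' is XOR."""
    q = [g[-1]]                     # synthetic division, high degree first
    for c in g[-2:0:-1]:
        q.append(gmul(q[-1], Li) ^ c)
    rem = gmul(q[-1], Li) ^ g[0]    # = g(Li), nonzero since g has no root in L
    s = ginv(rem)
    return [gmul(c, s) for c in reversed(q)]

def syndrome(word):
    """s(x) = sum of 1/(x - L_i) mod g(x) over the set bits of the word."""
    s = [0] * (len(g) - 1)
    for i, bit in enumerate(word):
        if bit:
            s = [a ^ b for a, b in zip(s, inv_linear(L[i]))]
    return s

# Syndrome of a weight-2 error pattern at positions 2 and 7:
e = [0] * 16
e[2] = e[7] = 1
print(syndrome(e))                  # s(x) = x + 9, printed as [9, 1]
```

A codeword sums to the zero polynomial here; the same coefficients could equally be read off by one multiplication with the alternative parity-check matrix.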
 
The algorithm then computes <math>v(x) \equiv \sqrt{s(x)^{-1}-x} \mod g(x)</math>. This fails when <math>s(x) \equiv 0</math>, but that is precisely the case when the input word is a codeword, so no error correction is necessary.
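When <math>g(x)</math> is irreducible of degree <math>t</math> over GF(2^4), the quotient ring GF(2^4)[x]/(g) is a field with <math>2^{4t}</math> elements, so both the inverse and the square root can be taken by exponentiation. A minimal sketch for <math>t = 2</math>; the field, the modulus, the sample syndrome <math>s(x) = x + 9</math>, and all names are illustrative assumptions:

```python
MOD = 0b10011                       # GF(2^4) as GF(2)[z]/(z^4 + z + 1)

def gmul(a, b):                     # multiply in GF(16)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD
        b >>= 1
    return r

g = [8, 1, 1]                       # g(x) = x^2 + x + a^3, irreducible, t = 2

def polymulmod(p, q):
    """Multiply two polynomials over GF(16) and reduce modulo monic g."""
    t = len(g) - 1
    prod = [0] * max(len(p) + len(q) - 1, t)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] ^= gmul(a, b)
    for d in range(len(prod) - 1, t - 1, -1):   # cancel top terms with g
        c = prod[d]
        if c:
            for k, gc in enumerate(g):
                prod[d - t + k] ^= gmul(c, gc)
    return prod[:t]

def fpow(p, e):
    """Raise p to the e-th power in the field GF(16)[x]/g (2^8 elements)."""
    r = [1] + [0] * (len(g) - 2)
    while e:
        if e & 1:
            r = polymulmod(r, p)
        p = polymulmod(p, p)
        e >>= 1
    return r

s = [9, 1]                          # sample syndrome s(x) = x + 9 (assumed)
s_inv = fpow(s, 254)                # inverse: s^(2^8 - 2)
target = [s_inv[0], s_inv[1] ^ 1]   # s^-1 - x; char 2, so '-' is '+'
v = fpow(target, 128)               # sqrt: p -> p^(2^7) inverts Frobenius p -> p^2
print(v)
```

Squaring is the Frobenius automorphism of this field, so raising to the power <math>2^{4t-1}</math> inverts it exactly; no root-finding is needed.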
 
<math>v(x)</math> is reduced to polynomials <math>a(x)</math> and <math>b(x)</math> using the [[extended Euclidean algorithm]], so that <math>a(x) \equiv b(x)\cdot v(x) \mod g(x)</math>, while <math>\deg(a)\leq\lfloor t/2 \rfloor</math> and <math>\deg(b)\leq\lfloor (t-1)/2 \rfloor</math>.
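The degree-bounded Euclidean step amounts to running the extended Euclidean algorithm on <math>(g, v)</math> and stopping at the first remainder of degree at most <math>\lfloor t/2 \rfloor</math>. A sketch over GF(2^4); the degree-4 modulus and the sample <math>v(x)</math> are arbitrary illustrative choices:

```python
MOD = 0b10011                       # GF(2^4) as GF(2)[z]/(z^4 + z + 1)

def gmul(a, b):                     # multiply in GF(16)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD
        b >>= 1
    return r

def ginv(a):                        # invert a nonzero element as a^14
    r, e = 1, 14
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def deg(p):
    """Degree of a coefficient list (lowest degree first); deg(0) = -1."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def polymul(p, q):
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] ^= gmul(a, b)
    return prod

def polyadd(p, q):                  # char 2: addition is coefficientwise XOR
    if len(p) < len(q):
        p, q = q, p
    return [a ^ b for a, b in zip(p, q + [0] * (len(p) - len(q)))]

def polydivmod(num, den):
    num = num[:]
    dd = deg(den)
    inv_lead = ginv(den[dd])
    q = [0] * max(deg(num) - dd + 1, 1)
    for d in range(deg(num), dd - 1, -1):
        c = gmul(num[d], inv_lead)
        q[d - dd] = c
        for k in range(dd + 1):
            num[d - dd + k] ^= gmul(c, den[k])
    return q, num                   # remainder is left in num

def eea_stop(v, g, dstop):
    """Partial extended Euclidean algorithm on (g, v): stop at the first
    remainder a with deg(a) <= dstop; the invariant a = b*v mod g holds."""
    r0, r1 = g[:], v[:]
    b0, b1 = [0], [1]
    while deg(r1) > dstop:
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        b0, b1 = b1, polyadd(b0, polymul(q, b1))
    return r1, b1

# Example with t = 4: a degree-4 modulus and some v(x) of degree 3.
g4 = [8, 1, 0, 0, 1]                # x^4 + x + 8
v = [3, 0, 7, 1]                    # x^3 + 7x^2 + 3
a, b = eea_stop(v, g4, 4 // 2)
print(deg(a), deg(b))               # degrees within the bounds: <= 2 and <= 1
```

Stopping at the first small remainder automatically bounds <math>\deg(b)</math> as well, since in the Euclidean algorithm the cofactor degree equals <math>\deg(g)</math> minus the degree of the previous remainder.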
 
Finally, the ''error locator polynomial'' is computed as <math>\sigma(x) = a(x)^2 + x\cdot b(x)^2</math>. Note that in the binary case, locating the errors is sufficient to correct them, as each erroneous bit has only one other possible value. In all non-binary cases, a separate error correction polynomial has to be computed as well.
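The steps can be put together for a toy <math>t = 2</math> code over GF(2^4) with <math>g(x) = x^2 + x + \alpha^3</math> and support <math>L</math> equal to the whole field. All parameters and names are illustrative assumptions; with <math>t = 2</math> the Euclidean split is trivial, <math>a(x) = v(x)</math> and <math>b(x) = 1</math>, so it is inlined here:

```python
MOD = 0b10011                       # GF(2^4) as GF(2)[z]/(z^4 + z + 1)

def gmul(a, b):                     # multiply in GF(16)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD
        b >>= 1
    return r

def ginv(a):                        # invert a nonzero element as a^(16-2)
    r, e = 1, 14
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

g = [8, 1, 1]                       # g(x) = x^2 + x + a^3, irreducible, t = 2
L = list(range(16))                 # support: all of GF(16), n = 16

def polymulmod(p, q):
    """Multiply polynomials over GF(16), reduce modulo monic g."""
    t = len(g) - 1
    prod = [0] * max(len(p) + len(q) - 1, t)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] ^= gmul(a, b)
    for d in range(len(prod) - 1, t - 1, -1):
        c = prod[d]
        if c:
            for k, gc in enumerate(g):
                prod[d - t + k] ^= gmul(c, gc)
    return prod[:t]

def fpow(p, e):                     # power in the 2^8-element field GF(16)[x]/g
    r = [1, 0]
    while e:
        if e & 1:
            r = polymulmod(r, p)
        p = polymulmod(p, p)
        e >>= 1
    return r

def inv_linear(Li):                 # (x - Li)^-1 mod g, via g = q*(x - Li) + g(Li)
    q = [g[-1]]
    for c in g[-2:0:-1]:
        q.append(gmul(q[-1], Li) ^ c)
    s = ginv(gmul(q[-1], Li) ^ g[0])
    return [gmul(c, s) for c in reversed(q)]

def syndrome(word):                 # s(x) = sum of 1/(x - L_i) over set bits
    s = [0, 0]
    for i, bit in enumerate(word):
        if bit:
            s = [a ^ b for a, b in zip(s, inv_linear(L[i]))]
    return s

def poly_eval(p, x):                # Horner evaluation over GF(16)
    r = 0
    for c in reversed(p):
        r = gmul(r, x) ^ c
    return r

def patterson_decode(word):
    s = syndrome(word)
    if s == [0, 0]:
        return word[:]              # zero syndrome: already a codeword
    s_inv = fpow(s, 254)
    v = fpow([s_inv[0], s_inv[1] ^ 1], 128)   # sqrt(s^-1 - x) mod g
    a, b = v, [1]                   # t = 2: degrees already fit the bounds
    # sigma(x) = a(x)^2 + x*b(x)^2, taken as an ordinary degree-2 polynomial
    sigma = [gmul(a[0], a[0]), gmul(b[0], b[0]), gmul(a[1], a[1])]
    out = word[:]
    for i, Li in enumerate(L):
        if poly_eval(sigma, Li) == 0:         # root of sigma: error at position i
            out[i] ^= 1
    return out

# Two errors relative to the all-zero codeword are located and flipped:
received = [0] * 16
received[2] = received[7] = 1
print(patterson_decode(received))
```

Note that <math>\sigma(x)</math> is formed as an ordinary polynomial, not reduced modulo <math>g(x)</math>; its roots among the <math>L_i</math> mark the error positions, and in the binary case flipping those bits completes the correction.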
 
If the original codeword was decodable and <math>e=(e_0,e_1,\dots,e_{n-1})</math> was the error vector, then