A binary Goppa code is defined by a [[polynomial]] <math>g(x)</math> of degree <math>t</math> without multiple zeros over a [[finite field]] <math>GF(2^m)</math>, and a sequence <math>L</math> of <math>n</math> distinct elements of <math>GF(2^m)</math> that are not roots of the polynomial:
: <math>\forall i,j \in \{0,\ldots,n-1\}: L_i \in GF(2^m) \,\land\, g(L_i) \neq 0 \,\land\, (i \neq j \implies L_i \neq L_j)</math>
Codewords belong to the kernel of the syndrome function, forming a subspace of <math>\{0,1\}^n</math>:
: <math>\Gamma(g,L) = \left\{ c \in \{0,1\}^n \,\left|\, \sum_{i=0}^{n-1} \frac{c_i}{x - L_i} \equiv 0 \mod g(x) \right.\right\}</math>
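For instance, take <math>m=4</math> and <math>g(x)=x^2+x+1</math>, which has no multiple zeros since <math>g'(x)=1</math>. Its roots in <math>GF(2^4)</math> are the two elements of the subfield <math>GF(4)</math> outside <math>GF(2)</math>, so <math>L</math> can contain at most the remaining <math>n=14</math> field elements; this yields a binary Goppa code of length 14, dimension at least <math>n-mt=14-4\cdot 2=6</math>, and minimum distance at least <math>2t+1=5</math>.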
Decoding of binary Goppa codes is traditionally done with the Patterson algorithm, which gives good error-correcting capability (it corrects all <math>t</math> design errors) and is also fairly simple to implement.
The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word <math>c=(c_0,\dots,c_{n-1})</math> is expected to take the form
: <math>s(x) \equiv \sum_{i=0}^{n-1} \frac{c_i}{x - L_i} \mod g(x)</math>
An alternative form of the parity-check matrix, based on the formula for <math>s(x)</math>, can be used to produce such a syndrome with a simple matrix multiplication.
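A minimal sketch of this syndrome computation in Python (the helper names and field parameters are illustrative assumptions, not part of any standard library) uses the fact that <math>g(x)-g(L_i)</math> is exactly divisible by <math>x-L_i</math>, so the exact quotient, scaled by <math>g(L_i)^{-1}</math>, is the inverse of <math>x-L_i</math> modulo <math>g(x)</math>:

<syntaxhighlight lang="python">
# Illustrative sketch: GF(2^4) with assumed reduction polynomial x^4 + x + 1;
# polynomials are coefficient lists, lowest degree first.
M, POLY = 4, 0b10011

def gf_mul(a, b):
    """Multiply two elements of GF(2^M): carry-less multiply with reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= POLY
    return r

def gf_inv(a):
    """Invert a nonzero element as a^(2^M - 2), by square-and-multiply."""
    r, e = 1, (1 << M) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a, e = gf_mul(a, a), e >> 1
    return r

def poly_eval(p, x):
    """Evaluate the polynomial p at x by Horner's rule (addition is XOR)."""
    r = 0
    for coeff in reversed(p):
        r = gf_mul(r, x) ^ coeff
    return r

def syndrome(c, L, g):
    """Compute s(x) = sum over i of c_i / (x - L_i) mod g(x).

    The quotient q(x) = (g(x) + g(L_i)) / (x + L_i) is exact and satisfies
    (x + L_i) * q(x) = g(L_i) mod g(x), so q(x) * g(L_i)^{-1} is the needed
    inverse of (x - L_i).  (Characteristic 2: subtraction equals addition.)
    """
    t = len(g) - 1
    s = [0] * t
    for i, ci in enumerate(c):
        if not ci:
            continue
        inv = gf_inv(poly_eval(g, L[i]))
        q, carry = [0] * t, 0
        for k in range(t, 0, -1):           # synthetic division by (x + L_i)
            carry = g[k] ^ gf_mul(L[i], carry)
            q[k - 1] = carry
        s = [sk ^ gf_mul(qk, inv) for sk, qk in zip(s, q)]
    return s

# Example: g(x) = 1 + x + x^2 is squarefree; L avoids its two roots in GF(16).
g = [1, 1, 1]
L = [a for a in range(1 << M) if poly_eval(g, a) != 0]
c = [0] * len(L)
c[3] = 1                                    # a single-bit error
print(syndrome(c, L, g))                    # nonzero: the word is not a codeword
</syntaxhighlight>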
Finally, the ''error locator polynomial'' is computed as <math>\sigma(x) = a(x)^2 + x\cdot b(x)^2</math>. Note that in the binary case, locating the errors is sufficient to correct them, as each bit has only one other possible value. In non-binary cases, a separate error-correction polynomial has to be computed as well.
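Because squaring is the Frobenius endomorphism in characteristic 2, <math>\sigma(x)</math> can be assembled coefficient-by-coefficient from <math>a(x)</math> and <math>b(x)</math> with no polynomial multiplication. A sketch under the same illustrative conventions as above:

<syntaxhighlight lang="python">
def sigma_from_ab(a, b):
    """sigma(x) = a(x)^2 + x*b(x)^2 in GF(2^M)[x].

    In characteristic 2, (sum a_k x^k)^2 = sum a_k^2 x^(2k): a(x)^2 fills
    the even-degree coefficients and x*b(x)^2 fills the odd-degree ones.
    """
    sigma = [0] * (2 * max(len(a), len(b)))
    for k, ak in enumerate(a):
        sigma[2 * k] ^= gf_mul(ak, ak)
    for k, bk in enumerate(b):
        sigma[2 * k + 1] ^= gf_mul(bk, bk)
    while len(sigma) > 1 and sigma[-1] == 0:
        sigma.pop()                          # drop leading zero coefficients
    return sigma
</syntaxhighlight>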
If the original codeword was decodable and <math>e=(e_0,e_1,\dots,e_{n-1})</math> is the binary error vector, then
: <math>\sigma(x) = \prod_{i \,:\, e_i = 1} (x-L_i)</math>
Factoring <math>\sigma(x)</math>, or finding its roots by evaluating it at every element of <math>L</math>, therefore gives enough information to recover the error vector and fix the errors.
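Since every error location is some <math>L_i</math>, the simplest approach (a brute-force stand-in for Chien search, continuing the illustrative Python above) is to evaluate <math>\sigma</math> at each element of the support:

<syntaxhighlight lang="python">
def error_positions(sigma, L):
    """Indices i with sigma(L_i) == 0, i.e., the bit positions in error."""
    return [i for i, Li in enumerate(L) if poly_eval(sigma, Li) == 0]

def correct(word, sigma, L):
    """Flip each located error bit to recover the original codeword."""
    fixed = list(word)
    for i in error_positions(sigma, L):
        fixed[i] ^= 1
    return fixed
</syntaxhighlight>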