Codewords belong to the kernel of the syndrome function, forming a subspace of <math>\{0,1\}^n</math>:
: <math>\Gamma(g,L)=\left\{ c \in \{0,1\}^n \,\left|\, \sum_{i=1}^{n} \frac{c_i}{x-L_i} \equiv 0 \mod g(x) \right.\right\}</math>
The code defined by a tuple <math>(g,L)</math> has dimension at least <math>n-mt</math> and minimum distance at least <math>2t+1</math>.
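The membership condition above can be checked directly. The following is a minimal sketch, not part of the standard presentation: the field <math>GF(2^4)</math> with reduction polynomial <math>x^4+x+1</math>, the Goppa polynomial <math>g(x)=x^2+x+\alpha^3</math>, and the support <math>L</math> (all field elements that are not roots of <math>g</math>) are ad hoc choices. Each factor <math>x-L_i</math> is inverted modulo <math>g(x)</math> using the identity <math>(x-a)^{-1} \equiv \frac{g(x)-g(a)}{x-a}\, g(a)^{-1}</math>, computed by synthetic division.

<syntaxhighlight lang="python">
# Sketch of the syndrome test that defines a binary Goppa code.
# Field elements of GF(2^4) are encoded as integers whose bits are the
# coefficients of polynomials over GF(2); reduction polynomial: x^4 + x + 1.
M, POLY = 4, 0b10011

def gf_mul(a, b):
    """Multiply two GF(2^m) elements."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def gf_inv(a):
    """Invert a nonzero element, using a^(2^m - 2) = a^(-1)."""
    r = 1
    for _ in range(2**M - 2):
        r = gf_mul(r, a)
    return r

def gf_eval(p, a):
    """Evaluate a polynomial (coefficients, highest degree first) at a."""
    r = 0
    for coef in p:
        r = gf_mul(r, a) ^ coef
    return r

def inv_x_plus_a(g, a):
    """Coefficients of (x - a)^(-1) mod g(x), via
    (x - a)^(-1) = (g(x) - g(a))/(x - a) * g(a)^(-1); note -a = a in char. 2."""
    q, r = [], 0
    for coef in g:                  # synthetic division of g(x) by (x + a)
        r = gf_mul(r, a) ^ coef
        q.append(r)
    g_at_a = q.pop()                # remainder is g(a); must be nonzero
    inv = gf_inv(g_at_a)
    return [gf_mul(c, inv) for c in q]

def syndrome(c, L, g):
    """s(x) = sum of c_i / (x - L_i) mod g(x), as a coefficient list."""
    s = [0] * (len(g) - 1)
    for ci, Li in zip(c, L):
        if ci:
            for j, coef in enumerate(inv_x_plus_a(g, Li)):
                s[j] ^= coef
    return s

# Ad hoc parameters: g(x) = x^2 + x + alpha^3 (alpha^3 encoded as 8); the
# support L consists of all field elements that are not roots of g.
g = [1, 1, 0b1000]
L = [a for a in range(2**M) if gf_eval(g, a) != 0]

# A word lies in Gamma(g, L) exactly when its syndrome is the zero polynomial.
codeword = [0] * len(L)             # the all-zero word is always a codeword
assert syndrome(codeword, L, g) == [0, 0]
erred = list(codeword)
erred[3] = 1                        # flipping one bit makes the syndrome nonzero
assert syndrome(erred, L, g) != [0, 0]
</syntaxhighlight>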
Decoding of binary Goppa codes is traditionally done with the Patterson algorithm, which gives good error-correcting capability (it corrects all <math>t</math> design errors) and is also fairly simple to implement.
The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word <math>c=(c_1,\dots,c_n)</math> is defined as
: <math>s(x) \equiv \sum_{i=1}^{n} \frac{c_i}{x-L_i} \mod g(x).</math>
An alternative form of the parity-check matrix, based on this formula for <math>s(x)</math>, can be used to produce such a syndrome with a simple matrix multiplication.
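One way to see this, continuing the sketch above (it reuses <code>gf_mul</code>, <code>inv_x_plus_a</code>, <code>syndrome</code>, <code>g</code>, <code>L</code> and <code>M</code> from that block; the row and column layout is illustrative, not a standard form): column <math>i</math> of the matrix holds the bit-expanded coefficients of <math>(x-L_i)^{-1} \bmod g(x)</math>, so multiplying the matrix by a binary word over <math>GF(2)</math> yields the bit pattern of its syndrome.

<syntaxhighlight lang="python">
# Continuation of the previous sketch: the same syndrome obtained as a
# matrix-vector product over GF(2).  Assumes gf_mul, inv_x_plus_a, syndrome,
# g, L and M from the sketch above are in scope.
t = len(g) - 1

# Column i holds the t coefficients of (x - L_i)^(-1) mod g(x).
columns = [inv_x_plus_a(g, Li) for Li in L]

# Expand each GF(2^m) coefficient into its m bits to obtain a binary matrix H
# with t*m rows and n = len(L) columns.
H = [[(columns[i][j] >> k) & 1 for i in range(len(L))]
     for j in range(t) for k in range(M)]

def syndrome_by_matrix(c):
    """Binary matrix-vector product H*c over GF(2), one row at a time."""
    return [sum(h & ci for h, ci in zip(row, c)) % 2 for row in H]

# The matrix product reproduces the bit pattern of the polynomial syndrome.
word = [0] * len(L)
word[3] = word[7] = 1
s_poly = syndrome(word, L, g)
s_bits = [(s_poly[j] >> k) & 1 for j in range(t) for k in range(M)]
assert syndrome_by_matrix(word) == s_bits
</syntaxhighlight>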
Finally, the ''error locator polynomial'' is computed as <math>\sigma(x) = a(x)^2 + x\cdot b(x)^2</math>. Note that in the binary case, locating the errors is sufficient to correct them, as each bit has only one other possible value. In non-binary cases a separate error-correction polynomial has to be computed as well.
If the original codeword was decodable and <math>e=(e_1, e_2, \dots, e_n)</math> was the binary error vector, then
: <math>\sigma(x) = \prod_{i\,:\,e_i = 1} (x-L_i).</math>
Factoring <math>\sigma(x)</math>, or evaluating it at all elements of <math>L</math> to find its roots, therefore gives enough information to recover the error vector and fix the errors.
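As a small illustration (again a sketch under the same ad hoc <math>GF(2^4)</math> conventions, with a hypothetical error vector; it does not carry out the Patterson steps that produce <math>a(x)</math> and <math>b(x)</math>): building <math>\sigma(x)</math> from known error positions and then evaluating it at every support element recovers exactly those positions, which is all a decoder needs in the binary case.

<syntaxhighlight lang="python">
# Sketch: the roots of the error locator polynomial are exactly the support
# elements at the error positions.  GF(2^4) with modulus x^4 + x + 1 (ad hoc).
M, POLY = 4, 0b10011

def gf_mul(a, b):
    """Multiply two GF(2^m) elements."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def poly_mul(p, q):
    """Multiply polynomials over GF(2^m), coefficients highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pc in enumerate(p):
        for j, qc in enumerate(q):
            r[i + j] ^= gf_mul(pc, qc)
    return r

def poly_eval(p, a):
    """Evaluate p at a by Horner's rule."""
    r = 0
    for coef in p:
        r = gf_mul(r, a) ^ coef
    return r

L = list(range(2**M))                 # support (ad hoc choice)
e = [0] * len(L)
e[2] = e[5] = e[11] = 1               # hypothetical error vector

# sigma(x) = product over error positions of (x - L_i); note -L_i = L_i.
sigma = [1]
for i, ei in enumerate(e):
    if ei:
        sigma = poly_mul(sigma, [1, L[i]])

# Evaluating sigma over the support recovers the error vector: in the binary
# case, flipping the bits at the root positions corrects the word.
recovered = [1 if poly_eval(sigma, Li) == 0 else 0 for Li in L]
assert recovered == e
</syntaxhighlight>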