{{Short description|Kind of error correction code}}
In [[mathematics]] and [[computer science]], the '''binary Goppa code''' is an [[error-correcting code]] that belongs to the class of general [[Goppa code]]s originally described by [[Valery Goppa]], but the binary structure gives it several mathematical advantages over non-binary variants, also providing a better fit for common usage in computers and telecommunication.
==Construction and properties==
A binary Goppa code is defined by a polynomial <math>g(x)</math> of degree <math>t</math> over a [[finite field]] <math>GF(2^m)</math> without multiple zeros, and a sequence <math>L_0,\dots,L_{n-1}</math> of <math>n</math> distinct elements of <math>GF(2^m)</math> that are not roots of <math>g</math>.

Codewords belong to the kernel of the syndrome function, forming a subspace of <math>\{0,1\}^n</math>:

: <math>\Gamma(g,L)=\left\{ c \in \{0,1\}^n \left| \sum_{i=0}^{n-1} \frac{c_i}{x-L_i} \equiv 0 \mod g(x) \right. \right\}</math>

The code defined by a tuple <math>(g,L)</math> has dimension at least <math>n-mt</math> and minimum distance at least <math>2t+1</math>, thus it can correct <math>\left\lfloor \frac{(2t+1)-1}{2} \right\rfloor = t</math> errors in a codeword of size <math>n</math> while encoding messages of length at least <math>n-mt</math>. It also possesses a convenient [[parity-check matrix]] <math>H</math> in the form
: <math>
H=VD=\begin{pmatrix}
1 & 1 & 1 & \cdots & 1\\
L_0^1 & L_1^1 & L_2^1 & \cdots & L_{n-1}^1\\
L_0^2 & L_1^2 & L_2^2 & \cdots & L_{n-1}^2\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
L_0^{t-1} & L_1^{t-1} & L_2^{t-1} & \cdots & L_{n-1}^{t-1}
\end{pmatrix}
\begin{pmatrix}
\frac{1}{g(L_0)} & & & & \\
& \frac{1}{g(L_1)} & & & \\
& & \frac{1}{g(L_2)} & & \\
& & & \ddots & \\
& & & & \frac{1}{g(L_{n-1})}
\end{pmatrix}
</math>
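A minimal Python sketch of this construction is shown below. It is only an illustration, not a reference implementation: the field size <math>m=4</math>, the field polynomial <math>x^4+x+1</math>, the Goppa polynomial <math>g(x)=x^2+x+1</math> and the support <math>L</math> are arbitrary assumed choices, and the matrix is left over <math>GF(2^m)</math> rather than expanded to binary form.

<syntaxhighlight lang="python">
# Toy parity-check matrix H = V*D of a small binary Goppa code over GF(2^4).
# All parameters below are illustrative assumptions.
M = 4
FIELD_POLY = 0b10011            # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply two GF(2^M) elements (integers < 2^M), reducing mod FIELD_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= FIELD_POLY
    return r

def gf_pow(a, e):
    """Raise a GF(2^M) element to a small non-negative power."""
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    """Invert a nonzero GF(2^M) element: a^(2^M - 2) = a^(-1)."""
    return gf_pow(a, (1 << M) - 2)

def g_eval(coeffs, x):
    """Evaluate a polynomial (GF(2^M) coefficients, lowest degree first) at x."""
    r = 0
    for c in reversed(coeffs):
        r = gf_mul(r, x) ^ c
    return r

# Assumed Goppa polynomial g(x) = x^2 + x + 1 (degree t = 2, squarefree);
# the support L consists of all field elements that are not roots of g.
g = [1, 1, 1]
t = len(g) - 1
L = [a for a in range(1 << M) if g_eval(g, a) != 0]
n = len(L)

# H = V * D: V is the t x n matrix of powers L_i^j, D = diag(1/g(L_i)).
V = [[gf_pow(Li, j) for Li in L] for j in range(t)]
D = [gf_inv(g_eval(g, Li)) for Li in L]
H = [[gf_mul(V[j][i], D[i]) for i in range(n)] for j in range(t)]

print(f"n = {n}, t = {t}; H is a {t} x {n} matrix over GF(2^{M})")
for row in H:
    print(row)
</syntaxhighlight>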
==Decoding==
Decoding of binary Goppa codes is traditionally done with the Patterson algorithm, which gives good error-correcting capability (it corrects all <math>t</math> design errors) and is also fairly simple to implement.
The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word <math>c=(c_0,\dots,c_{n-1})</math> is expected to be
: <math>s(x) \equiv \sum_{i=0}^{n-1} \frac{c_i}{x-L_i} \mod g(x)</math>
An alternative form of the parity-check matrix, based on the formula for <math>s(x)</math>, can be used to produce such a syndrome with a simple matrix multiplication.
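The sketch below continues the illustrative example above (reusing <code>gf_mul</code>, <code>gf_inv</code>, <code>g_eval</code>, <code>g</code>, <code>L</code> and <code>n</code>) and computes the syndrome polynomial directly; it is an assumed demonstration, not code from a decoder library. It relies on the identity that the quotient <math>q_i(x)</math> of the synthetic division of <math>g(x)</math> by <math>x-L_i</math> satisfies <math>q_i(x)\,(x-L_i) \equiv g(L_i) \mod g(x)</math> (in characteristic 2), so <math>(x-L_i)^{-1} \equiv q_i(x)/g(L_i) \mod g(x)</math>; one natural choice of the alternative parity-check matrix has exactly these coefficient vectors as its columns.

<syntaxhighlight lang="python">
def inv_x_minus_L(g, Li):
    """Coefficients (lowest degree first) of (x - Li)^(-1) mod g(x)."""
    t = len(g) - 1
    q = [0] * t
    acc = g[t]
    for j in range(t - 1, -1, -1):
        q[j] = acc                       # quotient coefficient of x^j
        acc = gf_mul(acc, Li) ^ g[j]     # acc ends up equal to the remainder g(Li)
    gLi_inv = gf_inv(acc)                # 1 / g(Li)
    # q(x)*(x - Li) = g(x) - g(Li), hence (x - Li)^(-1) = q(x)/g(Li) mod g(x)
    return [gf_mul(c, gLi_inv) for c in q]

def syndrome(c):
    """Syndrome polynomial s(x) of a binary word c, as t coefficients (low first)."""
    t = len(g) - 1
    s = [0] * t
    for i, bit in enumerate(c):
        if bit:
            col = inv_x_minus_L(g, L[i])
            s = [a ^ b for a, b in zip(s, col)]
    return s

print(syndrome([0] * n))              # the zero word is a codeword: syndrome 0
print(syndrome([1] + [0] * (n - 1)))  # a single flipped bit: nonzero syndrome
</syntaxhighlight>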
The algorithm then computes <math>v(x) \equiv \sqrt{s(x)^{-1} - x} \mod g(x)</math>. This fails when <math>s(x)\equiv 0</math>, but that is exactly the case when the input word is already a codeword, so no error correction is necessary.

<math>v(x)</math> is then split into polynomials <math>a(x)</math> and <math>b(x)</math> using the [[extended Euclidean algorithm]], so that <math>a(x) \equiv b(x)\cdot v(x) \mod g(x)</math>, while <math>\deg a \le \lfloor t/2 \rfloor</math> and <math>\deg b \le \lfloor (t-1)/2 \rfloor</math>.
Finally, the ''error locator polynomial'' is computed as <math>\sigma(x) = a(x)^2 + x\cdot b(x)^2</math>. Note that in the binary case, locating the errors is sufficient to correct them, as there is only one other possible value. In non-binary cases a separate error-correction polynomial has to be computed as well.
If the original codeword was decodable and <math>e=(e_0,\dots,e_{n-1})</math> was the binary error vector, then
: <math>\sigma(x) = \prod_{i=0}^{n-1} (x-L_i)^{e_i}</math>
Factoring or evaluating all roots of <math>\sigma(x)</math> therefore gives enough information to recover the error vector and fix the errors.
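Continuing the illustrative sketch (reusing <code>g_eval</code>, <code>gf_mul</code> and <code>L</code>), the error positions can be read off by evaluating <math>\sigma</math> at every support element. The error locator below is built by hand for two assumed error positions rather than produced by a full Patterson decoder, so it only demonstrates this final root-finding step.

<syntaxhighlight lang="python">
def error_vector(sigma, L):
    """e_i = 1 exactly when L_i is a root of the error locator sigma."""
    return [1 if g_eval(sigma, Li) == 0 else 0 for Li in L]

# sigma(x) = (x - L[2]) * (x - L[5]) for assumed errors at positions 2 and 5;
# over GF(2^m) this expands to x^2 + (L[2] + L[5]) x + L[2]*L[5].
sigma = [gf_mul(L[2], L[5]), L[2] ^ L[5], 1]
print(error_vector(sigma, L))   # ones exactly at indices 2 and 5
</syntaxhighlight>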