{{Short description|Kind of error correction code}}
In [[mathematics]] and [[computer science]], the '''binary Goppa code''' is an [[error-correcting code]] that belongs to the class of general [[Goppa code]]s originally described by [[Valerii Denisovich Goppa]], but the binary structure gives it several mathematical advantages over non-binary variants, also providing a better fit for common usage in computers and telecommunication. Binary Goppa codes have interesting properties suitable for [[cryptography]] in [[McEliece cryptosystem|McEliece-like cryptosystems]] and similar setups.
 
==Construction and properties==
 
An irreducible binary Goppa code is defined by a [[polynomial]] <math>g(x)</math> of degree <math>t</math> over a [[finite field]] <math>GF(2^m)</math> with no repeated roots, and a sequence <math>L_1, ..., L_n</math> of <math>n</math> distinct elements from <math>GF(2^m)</math> that are not roots of <math>g</math>.
 
Codewords belong to the kernel of the syndrome function, forming a subspace of <math>\{0,1\}^n</math>:
 
: <math>\Gamma(g,L)=\left\{ c \in \{0,1\}^n \,\Bigg\vert\, \sum_{i=1}^{n} \frac{c_i}{x-L_i} \equiv 0 \bmod g(x) \right\}</math>
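
For illustration, this membership condition can be checked directly by computing the sum in <math>GF(2^m)[x]</math>. The following is a minimal Python sketch (the function names and integer representation are choices made for this example, not a standard library API): field elements are integers in polynomial basis, and <math>(x-L_i)^{-1} \bmod g(x)</math> is obtained from the identity <math>(x-a)^{-1} \equiv \frac{g(x)-g(a)}{x-a}\, g(a)^{-1} \pmod{g(x)}</math> (valid in characteristic 2), whose numerator is produced by one pass of synthetic division.

<syntaxhighlight lang="python">
def gf_mul(a, b, m, red):
    """Multiply two GF(2^m) elements (ints < 2^m); `red` is the reduction polynomial with the x^m bit set."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= red
    return r

def gf_inv(a, m, red):
    """Multiplicative inverse in GF(2^m)*: a^(2^m - 2) by square-and-multiply."""
    r, e = 1, (1 << m) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a, m, red)
        a = gf_mul(a, a, m, red)
        e >>= 1
    return r

def inv_x_minus(a, g, m, red):
    """Coefficients (low degree first, length t) of 1/(x - a) mod g(x); requires g(a) != 0.
    q(x) = (g(x) - g(a)) / (x - a) is computed by synthetic (Horner) division."""
    t = len(g) - 1                       # g lists coefficients low degree first, length t + 1
    q = [0] * t
    q[t - 1] = g[t]
    for j in range(t - 1, 0, -1):
        q[j - 1] = g[j] ^ gf_mul(a, q[j], m, red)
    ga = g[0] ^ gf_mul(a, q[0], m, red)  # the division remainder equals g(a)
    ga_inv = gf_inv(ga, m, red)
    return [gf_mul(c, ga_inv, m, red) for c in q]

def is_codeword(c, g, L, m, red):
    """True iff sum_i c_i / (x - L_i) is 0 mod g(x), i.e. c lies in Gamma(g, L)."""
    s = [0] * (len(g) - 1)
    for bit, Li in zip(c, L):
        if bit:
            for k, coeff in enumerate(inv_x_minus(Li, g, m, red)):
                s[k] ^= coeff
    return all(v == 0 for v in s)
</syntaxhighlight>

The intermediate list <code>s</code> built here is exactly the coefficient vector of the syndrome polynomial used for decoding below.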
 
The code defined by a tuple <math>(g,L)</math> has dimension at least <math>n-mt</math> and minimum distance at least <math>2t+1</math>, thus it can encode messages of length at least <math>n-mt</math> using codewords of size <math>n</math>, while correcting at least <math>\left\lfloor \frac{(2t+1)-1}{2} \right\rfloor = t</math> errors. It also possesses a convenient [[parity-check matrix]] <math>H</math> in the form
 
: <math>
H=VD=\begin{pmatrix}
1 & 1 & 1 & \cdots & 1\\
L_1^1 & L_2^1 & L_3^1 & \cdots & L_n^1\\
L_1^2 & L_2^2 & L_3^2 & \cdots & L_n^2\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
L_1^{t-1} & L_2^{t-1} & L_3^{t-1} & \cdots & L_n^{t-1}
\end{pmatrix}
\begin{pmatrix}
\frac{1}{g(L_1)} & & & & \\
& \frac{1}{g(L_2)} & & & \\
& & \frac{1}{g(L_3)} & & \\
& & & \ddots & \\
& & & & \frac{1}{g(L_n)}
\end{pmatrix}
</math>
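
As a sketch of how <math>H</math> can be produced in practice (illustrative Python, using the same integer representation of <math>GF(2^m)</math> as in the sketch above; the helper functions are repeated here so the block is self-contained and are assumptions of this example), one builds the Vandermonde rows and scales each column by <math>1/g(L_i)</math>:

<syntaxhighlight lang="python">
def gf_mul(a, b, m, red):                 # multiply in GF(2^m); `red` has the x^m bit set
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= red
    return r

def gf_inv(a, m, red):                    # a^(2^m - 2), the multiplicative inverse
    r, e = 1, (1 << m) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a, m, red)
        a = gf_mul(a, a, m, red)
        e >>= 1
    return r

def gf_poly_eval(p, x, m, red):           # Horner evaluation; p lists coefficients low degree first
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x, m, red) ^ c
    return r

def parity_check_matrix(g, L, m, red):
    """t x n matrix H = V * D over GF(2^m): entry (k, i) is L_i^k / g(L_i)."""
    t, n = len(g) - 1, len(L)
    d = [gf_inv(gf_poly_eval(g, Li, m, red), m, red) for Li in L]   # diagonal of D
    rows, power = [], [1] * n                                       # power[i] = L_i^k
    for k in range(t):
        rows.append([gf_mul(power[i], d[i], m, red) for i in range(n)])
        power = [gf_mul(power[i], L[i], m, red) for i in range(n)]
    return rows
</syntaxhighlight>

Writing each <math>GF(2^m)</math> entry as a column of <math>m</math> bits turns this into a binary matrix with at most <math>mt</math> rows, which is why the code's dimension is at least <math>n-mt</math>.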

==Decoding==
Decoding of binary Goppa codes is traditionally done by the Patterson algorithm, which gives good error-correcting capability (it corrects all <math>t</math> design errors) and is also fairly simple to implement.
 
The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word <math>c=(c_1,\dots,c_n)</math> is expected to take the form
 
: <math>s(x) \equiv \sum_{i=1}^{n} \frac{c_i}{x - L_i} \bmod g(x)</math>
 
An alternative form of the parity-check matrix, based on the formula for <math>s(x)</math>, can be used to produce such a syndrome with a simple matrix multiplication.
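
Concretely (this is one standard way to read the statement above; the notation <math>\bar{H}</math> is not from the article), the <math>i</math>-th column of such a matrix holds the coefficients of <math>(x-L_i)^{-1} \bmod g(x)</math>:

: <math>\bar{H} = \begin{pmatrix} h_{0,1} & h_{0,2} & \cdots & h_{0,n} \\ \vdots & \vdots & & \vdots \\ h_{t-1,1} & h_{t-1,2} & \cdots & h_{t-1,n} \end{pmatrix}, \qquad \sum_{k=0}^{t-1} h_{k,i}\, x^k \equiv \frac{1}{x-L_i} \pmod{g(x)},</math>

so the coefficient vector of <math>s(x)</math> equals <math>\bar{H} c^{\mathsf{T}}</math>; expanding each entry into <math>m</math> bits gives a binary matrix that computes the syndrome in one matrix-vector multiplication.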

Finally, the ''error locator polynomial'' is computed as <math>\sigma(x) = a(x)^2 + x\cdot b(x)^2</math>, where <math>a(x)</math> and <math>b(x)</math> are the short polynomials produced by the extended-Euclidean step of the algorithm. Note that in the binary case, locating the errors is sufficient to correct them, as there is only one other value possible. In non-binary cases a separate error correction polynomial has to be computed as well.
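
The split into <math>a(x)^2</math> and <math>x\cdot b(x)^2</math> is possible for any polynomial over a field of characteristic 2 (a standard fact, stated here for clarity rather than taken from the algorithm's description): every coefficient has a unique square root, so collecting even- and odd-degree terms gives

: <math>\sigma(x) = \sigma_0 + \sigma_1 x + \sigma_2 x^2 + \sigma_3 x^3 + \cdots = \left(\sqrt{\sigma_0} + \sqrt{\sigma_2}\,x + \cdots\right)^2 + x\left(\sqrt{\sigma_1} + \sqrt{\sigma_3}\,x + \cdots\right)^2.</math>

Patterson's algorithm exploits this by searching directly for the two short polynomials <math>a(x)</math> and <math>b(x)</math> rather than for <math>\sigma(x)</math> itself.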
 
If the received word was decodable and <math>e=(e_1,e_2,\dots,e_n)</math> was the binary error vector, then
 
: <math>\sigma(x) = \prod_{i=1}^{n} (x-L_i)^{e_i} </math>
 
Factoring <math>\sigma(x)</math>, or evaluating it at all elements of <math>L</math>, therefore gives enough information to recover the error vector and fix the errors.
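
As an illustrative Python sketch of this last step (field representation, helpers and names as in the earlier sketches; they are assumptions of this example, not a fixed API), the error positions are simply the indices <math>i</math> with <math>\sigma(L_i)=0</math>:

<syntaxhighlight lang="python">
def gf_mul(a, b, m, red):                 # multiply in GF(2^m); `red` has the x^m bit set
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= red
    return r

def gf_poly_eval(p, x, m, red):           # Horner evaluation; p lists coefficients low degree first
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x, m, red) ^ c
    return r

def correct_errors(word, sigma, L, m, red):
    """Flip every bit of `word` whose support element L_i is a root of sigma."""
    corrected = list(word)
    for i, Li in enumerate(L):
        if gf_poly_eval(sigma, Li, m, red) == 0:   # L_i is a root, so position i is in error
            corrected[i] ^= 1
    return corrected
</syntaxhighlight>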