BCH code: Difference between revisions

{{short description|Error correction code}}
In [[coding theory]], the '''Bose&ndash;Chaudhuri&ndash;Hocquenghem codes''' ('''BCH codes''') form a class of [[cyclic code|cyclic]] [[Error correction code|error-correcting codes]] that are constructed using [[polynomial]]s over a [[finite field]] (also called a ''[[Finite field|Galois field]]''). BCH codes were invented in 1959 by French mathematician [[Alexis Hocquenghem]], and independently in 1960 by [[Raj Chandra Bose|Raj Bose]] and [[D.K. Ray-Chaudhuri|D. K. Ray-Chaudhuri]].<ref>{{Harvnb|Reed|Chen|1999|p=189}}</ref><ref>{{harvnb|Hocquenghem|1959}}</ref><ref>{{harvnb|Bose|Ray-Chaudhuri|1960}}</ref> The name ''Bose&ndash;Chaudhuri&ndash;Hocquenghem'' (and the acronym ''BCH'') arises from the initials of the inventors' surnames (mistakenly, in the case of Ray-Chaudhuri).
 
One of the key features of BCH codes is that during code design, there is a precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely, via an [[Abstract algebra|algebraic]] method known as [[syndrome decoding]]. This simplifies the design of the decoder for these codes, using small low-power electronic hardware.
 
BCH codes are used in applications such as satellite communications,<ref>{{cite web|title=Phobos Lander Coding System: Software and Analysis|url=http://ipnpr.jpl.nasa.gov/progress_report/42-94/94V.PDF |archive-url=https://ghostarchive.org/archive/20221009/http://ipnpr.jpl.nasa.gov/progress_report/42-94/94V.PDF |archive-date=2022-10-09 |url-status=live|access-date=25 February 2012}}</ref> [[compact disc]] players, [[DVD]]s, [[Disk storage|disk drives]], [[USB flash drive]]s, [[solid-state drive]]s,<ref>{{cite book|chapter=BCH Codes for Solid-State-Drives |doi=10.1007/978-981-13-0599-3_11 |chapter-url=https://link.springer.com/chapter/10.1007/978-981-13-0599-3_11 |access-date=23 September 2023 |title=Inside Solid State Drives (SSDs) |series=Springer Series in Advanced Microelectronics |date=2018 |last1=Marelli |first1=Alessia |last2=Micheloni |first2=Rino |volume=37 |pages=369–406 |isbn=978-981-13-0598-6}}</ref> and [[Barcode|two-dimensional bar codes]].
 
== Definition and illustration ==
Given a [[prime number]] {{mvar|q}} and [[prime power]] {{math|''q''<sup>''m''</sup>}} with positive integers {{mvar|m}} and {{mvar|d}} such that {{math|''d'' ≤ ''q''<sup>''m''</sup> − 1}}, a primitive narrow-sense BCH code over the [[finite field]] (or Galois field) {{math|GF(''q'')}} with code length {{math|''n'' {{=}} ''q''<sup>''m''</sup> − 1}} and [[Block code#The distance d|distance]] at least {{mvar|d}} is constructed by the following method.
 
Let {{mvar|α}} be a [[Simple extension#Definition|primitive element]] of {{math|GF(''q''<sup>''m''</sup>)}}.
For any positive integer {{mvar|i}}, let {{math|''m''<sub>''i''</sub>(''x'')}} be the [[minimal polynomial (field theory)|minimal polynomial]] with coefficients in {{math|GF(''q'')}} of {{math|α<sup>''i''</sup>}}.
The [[generator polynomial]] of the BCH code is defined as the [[least common multiple]] {{math|''g''(''x'') {{=}} lcm(''m''<sub>1</sub>(''x''),…,''m''<sub>''d'' − 1</sub>(''x''))}}.
It can be seen that {{math|''g''(''x'')}} is a polynomial with coefficients in {{math|GF(''q'')}} and divides {{math|''x''<sup>''n''</sup> − 1}}.
Therefore, the [[polynomial code]] defined by {{math|''g''(''x'')}} is a cyclic code.
 
==== Example ====
Let {{math|''q'' {{=}} 2}} and {{math|''m'' {{=}} 4}} (therefore {{math|''n'' {{=}} 15}}). We will consider different values of {{mvar|d}} for {{math|GF(16) {{=}} GF(2<sup>4</sup>)}} based on the reducing polynomial {{math|''z''<sup>4</sup> + ''z'' + 1}}, using primitive element {{math|''α''(''z'') {{=}} ''z''}}. There are fourteen minimum polynomials {{math|''m''<sub>''i''</sub>(''x'')}} with coefficients in {{math|GF(2)}} satisfying
:<math>m_i\left(\alpha^i\right) \bmod \left(z^4 + z + 1\right) = 0.</math>
 
The minimal polynomials of the fourteen powers of {{math|α}} are
:<math>\begin{align}
m_1(x) &= m_2(x) = m_4(x) = m_8(x) = x^4 + x + 1, \\
m_3(x) &= m_6(x) = m_9(x) = m_{12}(x) = x^4 + x^3 + x^2 + x + 1, \\
m_5(x) &= m_{10}(x) = x^2 + x + 1, \\
m_7(x) &= m_{11}(x) = m_{13}(x) = m_{14}(x) = x^4 + x^3 + 1.
\end{align}</math>
 
The BCH code with <math>d = 2, 3</math> has the generator polynomial
:<math>g(x) = {\rm lcm}(m_1(x), m_2(x)) = m_1(x) = x^4 + x + 1.\,</math>
 
It has minimal [[Hamming distance]] at least 3 and corrects up to one error. Since the generator polynomial is of degree 4, this code has 11 data bits and 4 checksum bits. It is also denoted as: '''(15, 11) BCH''' code.
 
The BCH code with <math>d=4,5</math> has the generator polynomial
:<math>\begin{align}
g(x) &= {\rm lcm}(m_1(x),m_2(x),m_3(x),m_4(x)) = m_1(x) m_3(x) \\
&= \left(x^4 + x + 1\right)\left(x^4 + x^3 + x^2 + x + 1\right) = x^8 + x^7 + x^6 + x^4 + 1.
\end{align}</math>
 
It has minimal Hamming distance at least 5 and corrects up to two errors. Since the generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits. It is also denoted as: '''(15, 7) BCH''' code.
 
The BCH code with <math>d=6,7</math> has the generator polynomial
:<math>\begin{align}
g(x) &= {\rm lcm}(m_1(x),m_2(x),m_3(x),m_4(x),m_5(x),m_6(x)) = m_1(x) m_3(x) m_5(x) \\
&= \left(x^4 + x + 1\right)\left(x^4 + x^3 + x^2 + x + 1\right)\left(x^2 + x + 1\right) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1.
\end{align}</math>
 
It has minimal Hamming distance at least 7 and corrects up to three errors. Since the generator polynomial is of degree 10, this code has 5 data bits and 10 checksum bits. It is also denoted as: '''(15, 5) BCH''' code. (This particular generator polynomial has a real-world application, in the "format information" of the [[QR code]].)
 
The BCH code with <math>d=8</math> and higher has the generator polynomial
:<math>\begin{align}
g(x) &= {\rm lcm}(m_1(x),m_2(x),...,m_{14}(x)) = m_1(x) m_3(x) m_5(x) m_7(x)\\
&= x^{14} + x^{13} + x^{12} + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1.
\end{align}</math>
 
This code has minimal Hamming distance 15 and corrects 7 errors. It has 1 data bit and 14 checksum bits. It is also denoted as: '''(15, 1) BCH''' code. In fact, this code has only two codewords: 000000000000000 and 111111111111111 (a trivial [[repetition code]]).
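The construction above can be checked numerically. The following is a minimal sketch (the helper names are ours, not a standard API): it expands each minimal polynomial from its cyclotomic coset in GF(16) and multiplies the distinct ones together, reproducing the generator polynomials listed above. Polynomials over GF(2) are encoded as integers, with bit ''i'' holding the coefficient of ''x''<sup>''i''</sup>.

```python
# Sketch: reproduce the generator polynomials of the length-15 binary BCH
# codes above. GF(2) polynomials are ints (bit i = coefficient of x^i).

def gf16_mul(a, b):
    """Multiply in GF(16) represented as GF(2)[z] modulo z^4 + z + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011          # reduce by z^4 + z + 1
    return r

def minimal_polynomial(i, n=15):
    """Minimal polynomial over GF(2) of alpha^i (alpha = z, a primitive element)."""
    coset, e = [], i % n          # cyclotomic coset {i, 2i, 4i, ...} mod n
    while e not in coset:
        coset.append(e)
        e = (2 * e) % n
    alpha = [1]                   # alpha^0 .. alpha^(n-1) as GF(16) elements
    for _ in range(n - 1):
        alpha.append(gf16_mul(alpha[-1], 0b0010))
    coeffs = [1]                  # expand prod (x + alpha^e); minus == plus in char 2
    for e in coset:
        new = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] ^= c                      # x * old
            new[k] ^= gf16_mul(alpha[e], c)      # alpha^e * old
        coeffs = new
    return sum(c << k for k, c in enumerate(coeffs))  # coefficients end up 0/1

def gf2_poly_mul(a, b):           # carry-less product of GF(2) polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def bch_generator(d, n=15):
    """g(x) = lcm(m_1, ..., m_{d-1}): product of the distinct minimal polynomials."""
    g = 1
    for m in {minimal_polynomial(i, n) for i in range(1, d)}:
        g = gf2_poly_mul(g, m)
    return g

assert bch_generator(3) == 0b10011             # (15,11): x^4 + x + 1
assert bch_generator(5) == 0b111010001         # (15,7):  x^8 + x^7 + x^6 + x^4 + 1
assert bch_generator(7) == 0b10100110111       # (15,5):  x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
assert bch_generator(15) == 0b111111111111111  # (15,1):  the repetition code
```

Because the minimal polynomials are irreducible, their least common multiple is simply the product of the distinct ones, which is what the sketch exploits.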
 
=== General BCH codes ===
The generator polynomial <math>g(x)</math> of a BCH code has coefficients from <math>\mathrm{GF}(q).</math>
In general, a cyclic code over <math>\mathrm{GF}(q^p)</math> with <math>g(x)</math> as the generator polynomial is called a BCH code over <math>\mathrm{GF}(q^p).</math>
The BCH code over <math>\mathrm{GF}(q^m)</math> and generator polynomial <math>g(x)</math> with successive powers of <math>\alpha</math> as roots is one type of [[Reed–Solomon code]] where the decoder (syndromes) alphabet is the same as the channel (data and generator polynomial) alphabet, all elements of <math>\mathrm{GF}(q^m)</math>.<ref>{{Harvnb|Gill|n.d.|p=3}}</ref> The other type of Reed–Solomon code is an [[Reed–Solomon error correction#Reed & Solomon's original view: The codeword as a sequence of values|original view Reed–Solomon code]], which is not a BCH code.
 
== Properties ==
 
The generator polynomial of a BCH code has degree at most <math>(d-1)m</math>. Moreover, if <math>q=2</math> and <math>c=1</math>, the generator polynomial has degree at most <math>dm/2</math>.
{{Collapse top|title=Proof}}
 
=== Non-systematic encoding: The message as a factor ===
 
The most straightforward way to find a polynomial that is a multiple of the generator is to compute the product of some arbitrary polynomial and the generator. In this case, the arbitrary polynomial can be chosen using the symbols of the message as coefficients.
 
:<math>\begin{align}
s(x) &= p(x)g(x)\\
&= \left(x^{20}+x^{18}+x^{17}+x^{15}+x^{14}+x^{13}+x^{11}+x^{10}+x^9+x^8+x^6+x^5+x^4+x^3+x^2+1\right)\left(x^{10}+x^9+x^8+x^6+x^5+x^3+1\right)\\
&= x^{30}+x^{29}+x^{26}+x^{25}+x^{24}+x^{22}+x^{19}+x^{17}+x^{16}+x^{15}+x^{14}+x^{12}+x^{10}+x^9+x^8+x^6+x^5+x^4+x^2+1
\end{align}</math>
 
=== Systematic encoding: The message as a prefix ===
 
A systematic code is one in which the message appears verbatim somewhere within the codeword. Therefore, systematic BCH encoding involves first embedding the message polynomial within the codeword polynomial, and then adjusting the coefficients of the remaining (non-message) terms to ensure that <math>s(x)</math> is divisible by <math>g(x)</math>.
 
 
The first step is finding, compatible with the computed syndromes and with the minimal possible <math>t,</math> the locator polynomial:
:<math>\Lambda(x) = \prod_{j=1}^t \left(x\alpha^{i_j} - 1\right)</math>
 
Three popular algorithms for this task are:
# [[BCH code#Peterson–Gorenstein–Zierler algorithm|Peterson–Gorenstein–Zierler algorithm]]
# [[Berlekamp–Massey algorithm]]
# [[Reed–Solomon error correction#Euclidean decoder|Sugiyama Euclidean algorithm]]
 
====Peterson–Gorenstein–Zierler algorithm====
<!-- this confuses t (max number of errors that can be corrected) with ν (actual number of errors) -->
[[Peterson's algorithm]] is step 2 of the generalized BCH decoding procedure. Peterson's algorithm is used to calculate the error locator polynomial coefficients <math> \lambda_1 , \lambda_2, \dots, \lambda_{v} </math> of a polynomial
 
: <math> \Lambda(x) = 1 + \lambda_1 x + \lambda_2 x^2 + \cdots + \lambda_v x^v .</math>
set <math>v \leftarrow v -1</math>
continue from the beginning of Peterson's decoding by making smaller <math>S_{v \times v}</math>
| After you have the values of <math>\Lambda</math>, you have the error locator polynomial.
| Stop Peterson procedure.
}}
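As an illustration, the following sketch (the helper names are ours) runs the procedure on the syndromes of the (15, 7) worked example given later in the article: it shrinks <math>v</math> until the syndrome matrix becomes non-singular, solves for the locator coefficients by elimination over GF(16), and then locates the errors by a brute-force root search.

```python
# Sketch of the Peterson–Gorenstein–Zierler step for the (15, 7) worked
# example later in the article. GF(16) = GF(2)[z]/(z^4 + z + 1), elements
# encoded as 4-bit integers.

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011              # reduce by z^4 + z + 1
    return r

def gf16_inv(a):
    return next(x for x in range(1, 16) if gf16_mul(a, x) == 1)

def alpha_pow(e):                     # alpha = z = 0b0010 is primitive
    r = 1
    for _ in range(e % 15):
        r = gf16_mul(r, 0b0010)
    return r

def gf16_eval(coeffs, x):             # coeffs[k] is the coefficient of x^k
    r = 0
    for c in reversed(coeffs):
        r = gf16_mul(r, x) ^ c
    return r

S = [0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001]      # s_1 .. s_6

def peterson(v):
    """Solve S_{v x v} [l_v .. l_1]^T = [s_{v+1} .. s_{2v}]^T; None if singular."""
    A = [[S[i + j] for j in range(v)] + [S[i + v]] for i in range(v)]
    for col in range(v):              # Gauss-Jordan elimination over GF(16)
        piv = next((r for r in range(col, v) if A[r][col]), None)
        if piv is None:
            return None               # singular: fewer than v errors occurred
        A[col], A[piv] = A[piv], A[col]
        inv = gf16_inv(A[col][col])
        A[col] = [gf16_mul(inv, x) for x in A[col]]
        for r in range(v):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x ^ gf16_mul(f, y) for x, y in zip(A[r], A[col])]
    return [row[v] for row in A]      # [lambda_v, ..., lambda_1]

v, sol = 3, None                      # t = 3 for this code
while v > 0 and sol is None:
    sol = peterson(v)
    if sol is None:
        v -= 1                        # shrink the system, as in the procedure

lam = [1] + sol[::-1]                 # Lambda(x) = 1 + l_1 x + ... + l_v x^v
roots = [k for k in range(15) if gf16_eval(lam, alpha_pow(k)) == 0]
error_positions = sorted((15 - k) % 15 for k in roots)
assert v == 2 and error_positions == [5, 13]   # errors at x^5 and x^13
```

A root <math>\alpha^{k}</math> of <math>\Lambda</math> corresponds to error position <math>-k \bmod 15</math>, which is why the positions come out as 5 and 13.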
 
Let
:<math>S(x) = s_c + s_{c+1}x + s_{c+2}x^2 + \cdots + s_{c+d-2}x^{d-2}.</math>

:<math>v \leqslant d-1, \lambda_0 \neq 0 \qquad \Lambda(x) = \sum_{i=0}^v \lambda_i x^i = \lambda_0 \prod_{k=0}^{v} \left(\alpha^{-i_k}x - 1\right).</math>
 
And the error evaluator polynomial<ref name="Gill-Forney">{{Harvnb|Gill|n.d.|p=47}}</ref>
 
:<math>\Omega(x) \equiv S(x) \Lambda(x) \bmod{x^{d-1}}</math>
 
Finally:
 
:<math>\Lambda'(x) = \sum_{i=1}^v i \cdot \lambda_i x^{i-1},</math>
 
where
:<math>i \cdot x := \sum_{k=1}^i x.</math>

Then, if the syndromes can be explained by an error word that can be nonzero only at positions <math>i_k</math>, the error values are
:<math>e_k = -{\alpha^{i_k}\Omega\left(\alpha^{-i_k}\right) \over \alpha^{c\cdot i_k}\Lambda'\left(\alpha^{-i_k}\right)}.</math>
 
For narrow-sense BCH codes, ''c'' = 1, so the expression simplifies to:
:<math>e_k = -{\Omega\left(\alpha^{-i_k}\right) \over \Lambda'\left(\alpha^{-i_k}\right)}.</math>
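For the narrow-sense worked example later in the article (errors at positions 5 and 13), the formula can be checked with a short sketch (the helper names are ours; the locator <math>\Lambda(x) = (\alpha^5 x - 1)(\alpha^{13} x - 1) = 1 + \alpha^7 x + \alpha^3 x^2</math> is hardcoded for those two positions):

```python
# Sketch of the Forney step on the narrow-sense (c = 1) worked example with
# errors at positions 5 and 13. GF(16) elements are 4-bit ints reduced by
# z^4 + z + 1; in characteristic 2 the minus sign in the formula is a plus.

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
    return r

def gf16_inv(a):
    return next(x for x in range(1, 16) if gf16_mul(a, x) == 1)

def alpha_pow(e):                   # alpha = z = 0b0010
    r = 1
    for _ in range(e % 15):
        r = gf16_mul(r, 0b0010)
    return r

def gf16_eval(coeffs, x):           # coeffs[k] is the coefficient of x^k
    r = 0
    for c in reversed(coeffs):
        r = gf16_mul(r, x) ^ c
    return r

S   = [0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001]  # s_1..s_6, so S(x)
Lam = [1, 0b1011, 0b1000]     # Lambda(x) = (alpha^5 x - 1)(alpha^13 x - 1)

# Omega(x) = S(x) Lambda(x) mod x^(d-1), with d - 1 = 6
Omega = [0] * 6
for i, s in enumerate(S):
    for j, l in enumerate(Lam):
        if i + j < 6:
            Omega[i + j] ^= gf16_mul(s, l)

# Formal derivative: in characteristic 2 only odd-degree terms survive
LamPrime = [l if k % 2 == 1 else 0 for k, l in enumerate(Lam)][1:]

errors = {}
for pos in (5, 13):
    x = alpha_pow(-pos % 15)        # evaluate at alpha^(-i_k)
    errors[pos] = gf16_mul(gf16_eval(Omega, x), gf16_inv(gf16_eval(LamPrime, x)))

assert errors == {5: 1, 13: 1}      # binary code: every error value is 1
```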
 
==== Explanation of Forney algorithm computation ====
It is based on [[Lagrange polynomial|Lagrange interpolation]] and techniques of [[generating function]]s.
 
Consider <math>S(x)\Lambda(x),</math> and for the sake of simplicity suppose <math>\lambda_k = 0</math> for <math>k > v,</math> and <math>s_k = 0</math> for <math>k > c + d - 2.</math> Then
 
:<math>S(x)\Lambda(x) = \sum_{j=0}^{\infty}\sum_{i=0}^j s_{j-i+1}\lambda_i x^j.</math>
 
 
Thanks to <math>v\leqslant d-1</math> we have
:<math>\Omega(x) = -\lambda_0\sum_{j=1}^v e_j\alpha^{c i_j} \prod_{\ell\in\{1,\cdots,v\}\setminus\{j\}} \left (\alpha^{i_\ell}x - 1 \right ).</math>
 
Thanks to <math>\Lambda</math> (the Lagrange interpolation trick) the sum degenerates to only one summand for <math>x = \alpha^{-i_k}</math>
 
:<math>\Omega \left (\alpha^{-i_k} \right ) = -\lambda_0 e_k \alpha^{c\cdot i_k}\prod_{\ell\in\{1,\cdots,v\}\setminus\{k\}} \left (\alpha^{i_\ell}\alpha^{-i_k} - 1 \right ). </math>
 
To get <math>e_k,</math> we only need to get rid of the product. We could compute the product directly from the already computed roots <math>\alpha^{-i_j}</math> of <math>\Lambda,</math> but we can use a simpler form.
 
As [[formal derivative]]
:<math>\Lambda'(x) = \lambda_0\sum_{j=1}^v \alpha^{i_j}\prod_{\ell\in\{1,\cdots,v\}\setminus\{j\}} \left (\alpha^{i_\ell}x - 1\right ),</math>
 
we get again only one summand in
:<math>\Lambda'\left(\alpha^{-i_k}\right) = \lambda_0\alpha^{i_k}\prod_{\ell\in\{1,\cdots,v\}\setminus\{k\}} \left (\alpha^{i_\ell}\alpha^{-i_k} - 1\right ).</math>
 
So finally
:<math>e_k = -\frac{\alpha^{i_k}\Omega \left (\alpha^{-i_k} \right )}{\alpha^{c\cdot i_k}\Lambda' \left (\alpha^{-i_k} \right )}.</math>
 
This formula is advantageous when one computes the formal derivative of <math>\Lambda</math> in the form
:<math>\Lambda(x) = \sum_{i=1}^v \lambda_i x^i</math>
 
yielding:
An alternate process of finding both the polynomial Λ and the error locator polynomial is based on Yasuo Sugiyama's adaptation of the [[Extended Euclidean algorithm]].<ref>Yasuo Sugiyama, Masao Kasahara, Shigeichi Hirasawa, and Toshihiko Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27:87–99, 1975.</ref> Correction of unreadable characters can easily be incorporated into the algorithm as well.
 
Let <math>k_1, ..., k_k</math> be positions of unreadable characters. One creates the polynomial localising these positions, <math>\Gamma(x) = \prod_{i=1}^k\left(x\alpha^{k_i} - 1\right).</math>
Set values on unreadable positions to 0 and compute the syndromes.
 
If <math>\Lambda(x)</math> denotes the polynomial eliminating the influence of these coordinates, we obtain
 
:<math>S(x)\Gamma(x)\Lambda(x) \stackrel{\{k+v, \cdots, d-2\}}{=} 0.</math>
 
In the Euclidean algorithm, we try to correct at most <math>\tfrac{1}{2}(d-1-k)</math> errors (on readable positions), because with a bigger error count there could be more codewords at the same distance from the received word. Therefore, for the <math>\Lambda(x)</math> we are looking for, the equation must hold for coefficients near powers starting from
:<math>R(x) = C(x) + x^{13} + x^5 = x^{14} + x^{11} + x^{10} + x^9 + x^5 + x^4 + x^2</math>
In order to correct the errors, first calculate the syndromes. Taking <math>\alpha = 0010,</math> we have <math>s_1 = R(\alpha^1) = 1011,</math> <math>s_2 = 1001,</math> <math>s_3 = 1011,</math> <math>s_4 = 1101,</math> <math>s_5 = 0001,</math> and <math>s_6 = 1001.</math>
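The syndrome values above can be verified with a short sketch (the helper names are ours): evaluate the received polynomial at the powers of <math>\alpha</math> by Horner's rule over GF(16).

```python
# Sketch: recompute the syndromes of the received word R(x) above. GF(16)
# elements are 4-bit ints reduced by z^4 + z + 1, matching alpha = 0010.

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
    return r

def gf16_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf16_mul(r, a)
    return r

# R(x) = x^14 + x^11 + x^10 + x^9 + x^5 + x^4 + x^2, constant term first
R = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1]

def syndrome(j, alpha=0b0010):
    x = gf16_pow(alpha, j)
    s = 0
    for c in reversed(R):           # Horner evaluation of R at alpha^j
        s = gf16_mul(s, x) ^ c
    return s

syndromes = [syndrome(j) for j in range(1, 7)]
assert syndromes == [0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001]
```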
Next, apply the Peterson procedure by row-reducing the following [[augmented matrix]].
:<math>\left [ S_{3 \times 3} | C_{3 \times 1} \right ] =
\begin{bmatrix}s_1&s_2&s_3&s_4\\
s_2&s_3&s_4&s_5\\
s_3&s_4&s_5&s_6\end{bmatrix} =
\begin{bmatrix}1011&1001&1011&1101\\
1001&1011&1101&0001\\
1011&1101&0001&1001\end{bmatrix}</math>
 
==== Decoding with unreadable characters ====
Suppose the same scenario, but the received word has two unreadable characters [ 1 {{color|red|0}} 0 ? 1 1 ? 0 0 {{color|red|1}} 1 0 1 0 0 ]. We replace the unreadable characters by zeros while creating the polynomial reflecting their positions <math>\Gamma(x) = \left(\alpha^8x - 1\right)\left(\alpha^{11}x - 1\right).</math> We compute the syndromes <math>s_1=\alpha^{-7}, s_2=\alpha^{1}, s_3=\alpha^{4}, s_4=\alpha^{2}, s_5=\alpha^{5},</math> and <math>s_6=\alpha^{-7}.</math> (Using log notation, which is independent of GF(2<sup>4</sup>) isomorphisms. For computation checking, we can use the same representation for addition as was used in the previous example. [[Hexadecimal]] descriptions of the powers of <math>\alpha</math> are consecutively 1,2,4,8,3,6,C,B,5,A,7,E,F,D,9, with the addition based on bitwise xor.)
 
Let us make syndrome polynomial
Line 444 ⟶ 439:
 
:<math>\begin{align}
&\begin{pmatrix}S(x)\Gamma(x)\\ x^6\end{pmatrix} \\ [6pt]
={} &\begin{pmatrix}\alpha^{-7} +\alpha^{4}x+ \alpha^{-1}x^2+ \alpha^{6}x^3+ \alpha^{-1}x^4+ \alpha^{5}x^5 +\alpha^{7}x^6+ \alpha^{-3}x^7 \\ x^6\end{pmatrix} \\ [6pt]
={} &\begin{pmatrix}\alpha^{7}+ \alpha^{-3}x & 1\\ 1 & 0\end{pmatrix}
\begin{pmatrix}x^6\\ \alpha^{-7} +\alpha^{4}x +\alpha^{-1}x^2 +\alpha^{6}x^3 +\alpha^{-1}x^4 +\alpha^{5}x^5 +2\alpha^{7}x^6 +2\alpha^{-3}x^7\end{pmatrix} \\ [6pt]
={} &\begin{pmatrix}\alpha^{7}+ \alpha^{-3}x & 1\\ 1 & 0\end{pmatrix}
\begin{pmatrix}\alpha^4 + \alpha^{-5}x & 1\\ 1 & 0\end{pmatrix} \times \\
&\qquad \times \begin{pmatrix} \alpha^{-7}+ \alpha^{4}x+ \alpha^{-1}x^2+ \alpha^{6}x^3+ \alpha^{-1}x^4+ \alpha^{5}x^5\\ \alpha^{-3} +\left(\alpha^{-7}+ \alpha^{3}\right)x+ \left(\alpha^{3}+ \alpha^{-1}\right)x^2+ \left(\alpha^{-5}+ \alpha^{-6}\right)x^3+ \left(\alpha^3+ \alpha^{1}\right)x^4+ 2\alpha^{-6}x^5+ 2x^6\end{pmatrix} \\ [6pt]
={} &\begin{pmatrix}\left(1+ \alpha^{-4}\right)+ \left(\alpha^{1}+ \alpha^{2}\right)x+ \alpha^{7}x^2 & \alpha^{7}+ \alpha^{-3}x \\ \alpha^4+ \alpha^{-5}x & 1\end{pmatrix}
\begin{pmatrix} \alpha^{-7} + \alpha^{4}x + \alpha^{-1}x^2+ \alpha^{6}x^3+ \alpha^{-1}x^4+ \alpha^{5}x^5\\ \alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\end{pmatrix} \\ [6pt]
={} &\begin{pmatrix}\alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2 & \alpha^{7}+ \alpha^{-3}x \\ \alpha^4+ \alpha^{-5}x & 1\end{pmatrix}
\begin{pmatrix}\alpha^{-5}+ \alpha^{-4}x & 1\\ 1 & 0 \end{pmatrix} \times \\
&\qquad \times \begin{pmatrix} \alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\\ \left(\alpha^{7}+ \alpha^{-7}\right)+ \left(2\alpha^{-7}+ \alpha^{4}\right)x+ \left(\alpha^{-5}+ \alpha^{-6}+ \alpha^{-1}\right)x^2+ \left(\alpha^{-7}+ \alpha^{-4}+ \alpha^{6}\right)x^3+ \left(\alpha^{4}+ \alpha^{-6}+ \alpha^{-1}\right)x^4+ 2\alpha^{5}x^5\end{pmatrix} \\ [6pt]
={} &\begin{pmatrix} \alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3 & \alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2\\ \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2 & \alpha^4+ \alpha^{-5}x\end{pmatrix}
\begin{pmatrix} \alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\\ \alpha^{-4}+ \alpha^{4}x+ \alpha^{2}x^2+ \alpha^{-5}x^3\end{pmatrix}.
\end{align}</math>
 
We have reached a polynomial of degree at most 3, and as
 
:<math>\begin{pmatrix}-\left(\alpha^4+ \alpha^{-5}x\right) & \alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2\\ \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2 & -\left(\alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3\right)\end{pmatrix} \begin{pmatrix} \alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3 & \alpha^{-3} + \alpha^{5}x + \alpha^{7}x^2\\ \alpha^{3} + \alpha^{-5}x + \alpha^{6}x^2 & \alpha^4 + \alpha^{-5}x\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},</math>
 
we get
 
:<math> \begin{pmatrix}-\left(\alpha^4+ \alpha^{-5}x\right) & \alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2\\ \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2 & -\left(\alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3\right)\end{pmatrix}
\begin{pmatrix}S(x)\Gamma(x)\\ x^6\end{pmatrix} = \begin{pmatrix} \alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\\ \alpha^{-4}+ \alpha^{4}x + \alpha^{2}x^2+ \alpha^{-5}x^3 \end{pmatrix}. </math>
 
Therefore,
:<math>S(x)\Gamma(x)\left(\alpha^{3} + \alpha^{-5}x + \alpha^{6}x^2\right) - \left(\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3\right)x^6 = \alpha^{-4} + \alpha^{4}x + \alpha^{2}x^2 + \alpha^{-5}x^3.</math>
 
Let <math>\Lambda(x) = \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2.</math> Don't worry that <math>\lambda_0\neq 1.</math> Find by brute force a root of <math>\Lambda.</math> The roots are <math>\alpha^2</math> and <math>\alpha^{10}</math> (after finding for example <math>\alpha^2,</math> we can divide <math>\Lambda</math> by the corresponding monomial <math>\left(x - \alpha^2\right),</math> and the root of the resulting monomial can be found easily).
 
Let
:<math>\begin{align}
\Xi(x) &= \Gamma(x)\Lambda(x) = \alpha^3 + \alpha^4x^2 + \alpha^2x^3 + \alpha^{-5}x^4 \\
\Omega(x) &= S(x)\Xi(x) \equiv \alpha^{-4} + \alpha^4x + \alpha^2x^2 + \alpha^{-5}x^3 \bmod{x^6}
\end{align}</math>
 
 
:<math>\begin{align}
e_1 &=-\frac{\Omega(\alpha^4)}{\Xi'(\alpha^{4})} = \frac{\alpha^{-4}+\alpha^{-7}+\alpha^{-5}+\alpha^{7}}{\alpha^{-5}} =\frac{\alpha^{-5}}{\alpha^{-5}}=1 \\
e_2 &=-\frac{\Omega(\alpha^7)}{\Xi'(\alpha^{7})} = \frac{\alpha^{-4}+\alpha^{-4}+\alpha^{1}+\alpha^{1}}{\alpha^{1}}=0 \\
e_3 &=-\frac{\Omega(\alpha^{10})}{\Xi'(\alpha^{10})} = \frac{\alpha^{-4}+\alpha^{-1}+\alpha^{7}+\alpha^{-5}}{\alpha^{7}}=\frac{\alpha^{7}}{\alpha^{7}}=1 \\
e_4 &=-\frac{\Omega(\alpha^{2})}{\Xi'(\alpha^{2})} = \frac{\alpha^{-4}+\alpha^{6}+\alpha^{6}+\alpha^{1}}{\alpha^{6}}=\frac{\alpha^{6}}{\alpha^{6}}=1
\end{align}</math>
 
Let us show the algorithm behaviour for a case with a small number of errors. Let the received word be [ 1 {{color|red|0}} 0 ? 1 1 ? 0 0 0 1 0 1 0 0 ].
 
Again, replace the unreadable characters by zeros while creating the polynomial reflecting their positions <math>\Gamma(x) = \left(\alpha^{8}x - 1\right)\left(\alpha^{11}x - 1\right).</math>
Compute the syndromes <math>s_1 = \alpha^{4}, s_2 = \alpha^{-7}, s_3 = \alpha^{1}, s_4 = \alpha^{1}, s_5 = \alpha^{0},</math> and <math>s_6 = \alpha^{2}.</math>
Create syndrome polynomial
==References==

===Primary sources===
* {{Citation
|last= Hocquenghem
|first= A.
|author-link= Alexis Hocquenghem
|title= Codes correcteurs d'erreurs
|language= fr
|journal= Chiffres
|___location= Paris
|volume= 2
|pages= 147–156
|date= September 1959
}}
* {{Citation
|first1= R. C.
|last1= Bose
|author-link= R. C. Bose
|first2= D. K.
|last2= Ray-Chaudhuri
|author-link2= D. K. Ray-Chaudhuri
|title= On A Class of Error Correcting Binary Group Codes
|journal= Information and Control
|volume= 3
|issue= 1
|pages= 68–79
|date= March 1960
|issn= 0890-5401
|doi=10.1016/s0019-9958(60)90287-4|url= http://repository.lib.ncsu.edu/bitstream/1840.4/2137/1/ISMS_1959_240.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://repository.lib.ncsu.edu/bitstream/1840.4/2137/1/ISMS_1959_240.pdf |archive-date=2022-10-09 |url-status=live
}}
 
===Secondary sources===
* {{Citation|last=Gill |first=John |title=EE387 Notes #7, Handout #28 |date=n.d. |access-date=April 21, 2010 |pages=42–45 |publisher=Stanford University |url=http://www.stanford.edu/class/ee387/handouts/notes7.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.stanford.edu/class/ee387/handouts/notes7.pdf |archive-date=2022-10-09 |url-status=live }}{{dead link|date=June 2021|bot=medic}}{{cbignore|bot=medic}} Course notes are apparently being redone for 2012: http://www.stanford.edu/class/ee387/ {{Webarchive|url=https://web.archive.org/web/20130605170343/http://www.stanford.edu/class/ee387/ |date=2013-06-05 }}
* {{Citation
|last1= Gorenstein
|first1= Daniel
|author-link= Daniel Gorenstein
|last2= Peterson
|first2= W. Wesley
|author-link2= W. Wesley Peterson
|last3= Zierler
|first3 = Neal
|author-link3= Neal Zierler
|title= Two-Error Correcting Bose-Chaudhuri Codes are Quasi-Perfect
|journal= Information and Control
|pages= 291–294
|year= 1960
|doi= 10.1016/s0019-9958(60)90877-9
|doi-access= free
}}
* {{Citation
|first1= Rudolf
|last1= Lidl
|first2= Günter
|last2= Pilz
|publisher= John Wiley
|year= 1999
}}
* {{Citation
|first1= Irving S.
|last1= Reed
|author-link= Irving S. Reed
|first2= Xuemin
|last2= Chen
|year= 1999
|isbn= 0-7923-8528-4
}}
 
==Further reading==
* {{Citation |last1=Blahut |first1=Richard E. |author-link1=Richard Blahut |title=Algebraic Codes for Data Transmission |edition=2nd |publisher=[[Cambridge University Press]] |year=2003 |isbn=0-521-55374-1}}
* {{Citation
|first1= W. J.
|last1= Gilbert
|first2= W. K.
|last2= Nicholson
|publisher= John Wiley
|year= 2004
}}
* {{Citation
|first1= S.
|last1= Lin
|first2= D.
|last2= Costello
|___location= Englewood Cliffs, NJ
|year= 2004
}}
* {{Citation
|first1= F. J.
|last1= MacWilliams
|first2= N. J. A.
|last2= Sloane
|author-link2= N. J. A. Sloane
|title= The Theory of Error-Correcting Codes
|___location= New York, NY
|publisher= North-Holland Publishing Company
|year= 1977
}}
* {{Citation
|first = Atri
|publisher = University at Buffalo
|url = http://www.cse.buffalo.edu/~atri/courses/coding-theory/
|archive-url = https://web.archive.org/web/20121218004156/http://www.cse.buffalo.edu:80/~atri/courses/coding-theory/
|accessdate = April 21, 2010
|archive-date = 2012-12-18
|url-status = dead
}}