BCH code: Difference between revisions

Line 1:
{{short description|Error correction code}}
In [[coding theory]], the '''Bose&ndash;Chaudhuri&ndash;Hocquenghem codes''' ('''BCH codes''') form a class of [[cyclic code|cyclic]] [[Error correction code|error-correcting codes]] that are constructed using [[polynomial]]s over a [[finite field]] (also called a ''[[Finite field|Galois field]]''). BCH codes were invented in 1959 by French mathematician [[Alexis Hocquenghem]], and independently in 1960 by [[Raj Chandra Bose]] and [[D. K. Ray-Chaudhuri]].<ref>{{Harvnb|Reed|Chen|1999|p=189}}</ref><ref>{{harvnb|Hocquenghem|1959}}</ref><ref>{{harvnb|Bose|Ray-Chaudhuri|1960}}</ref> The name ''Bose&ndash;Chaudhuri&ndash;Hocquenghem'' (and the acronym ''BCH'') arises from the initials of the inventors' surnames (mistakenly, in the case of Ray-Chaudhuri).
 
One of the key features of BCH codes is that during code design, there is precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely, via an [[Abstract algebra|algebraic]] method known as [[syndrome decoding]]. This simplifies the design of the decoder for these codes, allowing it to be implemented with small, low-power electronic hardware.
 
BCH codes are used in applications such as satellite communications,<ref>{{cite web|title=Phobos Lander Coding System: Software and Analysis|url=http://ipnpr.jpl.nasa.gov/progress_report/42-94/94V.PDF |archive-url=https://ghostarchive.org/archive/20221009/http://ipnpr.jpl.nasa.gov/progress_report/42-94/94V.PDF |archive-date=2022-10-09 |url-status=live|access-date=25 February 2012}}</ref> [[compact disc]] players, [[DVD]]s, [[Disk storage|disk drives]], [[USB flash drive]]s, [[solid-state drive]]s,<ref>{{cite book|chapter=BCH Codes for Solid-State-Drives|doi=10.1007/978-981-13-0599-3_11 |chapter-url=https://link.springer.com/chapter/10.1007/978-981-13-0599-3_11|access-date=23 September 2023 |title=Inside Solid State Drives (SSDS) |series=Springer Series in Advanced Microelectronics |date=2018 |last1=Marelli |first1=Alessia |last2=Micheloni |first2=Rino |volume=37 |pages=369–406 |isbn=978-981-13-0598-6 }}</ref> and [[Barcode|two-dimensional bar codes]].
 
== Definition and illustration ==
Line 82:
 
== Properties ==
 
The generator polynomial of a BCH code has degree at most <math>(d-1)m</math>. Moreover, if <math>q=2</math> and <math>c=1</math>, the generator polynomial has degree at most <math>dm/2</math>.
{{Collapse top|title=Proof}}
Line 135 ⟶ 134:
 
=== Non-systematic encoding: The message as a factor ===
 
The most straightforward way to find a polynomial that is a multiple of the generator is to compute the product of some arbitrary polynomial and the generator. In this case, the arbitrary polynomial can be chosen using the symbols of the message as coefficients.
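As an illustrative sketch, the product can be computed as follows, assuming GF(2) polynomials are stored as coefficient lists with the lowest-degree term first; the generator used here, <math>g(x) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1</math>, generates a triple-error-correcting (15, 5) binary BCH code, and the helper names are illustrative rather than from any particular library.

<syntaxhighlight lang="python">
# Illustrative sketch: non-systematic BCH encoding as p(x) * g(x) over GF(2).
# Polynomials are lists of bits, lowest-degree coefficient first.

def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, low degree first)."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                result[i + j] ^= bj          # addition in GF(2) is XOR
    return result

# g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10, a (15, 5) BCH generator
generator = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1]

message = [1, 1, 0, 1, 1]                    # five message bits = coefficients of p(x)
codeword = gf2_poly_mul(message, generator)  # s(x) = p(x) g(x) is a multiple of g(x)
print(codeword)                              # 15 bits; the message does not appear verbatim
</syntaxhighlight>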
 
Line 153 ⟶ 151:
 
=== Systematic encoding: The message as a prefix ===
 
A systematic code is one in which the message appears verbatim somewhere within the codeword. Therefore, systematic BCH encoding involves first embedding the message polynomial within the codeword polynomial, and then adjusting the coefficients of the remaining (non-message) terms to ensure that <math>s(x)</math> is divisible by <math>g(x)</math>.
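A minimal sketch of this procedure for a binary code, under the same low-degree-first GF(2) representation as in the previous sketch: the message is shifted by <math>\deg g(x)</math> positions and the remainder of the division by <math>g(x)</math> fills the non-message terms, so the result is divisible by <math>g(x)</math> while the message bits appear verbatim in the highest-degree coefficients.

<syntaxhighlight lang="python">
# Illustrative sketch: systematic binary BCH encoding.
# s(x) = x^deg(g) p(x) + (x^deg(g) p(x) mod g(x)); over GF(2), + and - coincide.

def gf2_poly_mod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2) (lists, low degree first)."""
    a = a[:]                                     # work on a copy
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:                                 # cancel the leading term using g(x)
            for j, gj in enumerate(g):
                a[i - (len(g) - 1) + j] ^= gj
    return a[:len(g) - 1]

generator = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1]    # g(x) = 1+x+x^2+x^4+x^5+x^8+x^10
message = [1, 1, 0, 1, 1]                        # p(x), five message bits

shifted = [0] * (len(generator) - 1) + message   # x^10 p(x)
checks = gf2_poly_mod(shifted, generator)        # remainder fills the non-message terms
codeword = checks + message                      # message = the five highest-degree coefficients,
                                                 # i.e. a prefix when read from the leading term down
assert gf2_poly_mod(codeword, generator) == [0] * (len(generator) - 1)
</syntaxhighlight>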
 
Line 211 ⟶ 208:
====Peterson–Gorenstein–Zierler algorithm====
<!-- this confuses t (max number of errors that can be corrected) with ν (actual number of errors) -->
[[Peterson's algorithm]] is step 2 of the generalized BCH decoding procedure. It is used to calculate the coefficients <math> \lambda_1 , \lambda_2, \dots, \lambda_{v} </math> of the error locator polynomial
 
: <math> \Lambda(x) = 1 + \lambda_1 x + \lambda_2 x^2 + \cdots + \lambda_v x^v .</math>
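The following sketch, assuming a binary BCH code over GF(2<sup>4</sup>) built on the primitive polynomial <math>x^4 + x + 1</math> (the field of the worked example below), solves the linear system formed from the syndromes (rows <math>[s_i, \dots, s_{i+v-1} \mid s_{i+v}]</math>) by Gaussian elimination, retrying with a smaller <math>v</math> whenever the syndrome matrix is singular; the function names are illustrative.

<syntaxhighlight lang="python">
# Illustrative sketch of the Peterson step for a binary BCH code over GF(2^4).
# Field elements are integers 0..15; the reducing polynomial is x^4 + x + 1.

def gf_mul(a, b, prim=0b10011, m=4):
    """Multiply two GF(2^m) elements represented as integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim                              # reduce modulo x^4 + x + 1
    return result

def gf_inv(a):
    """Multiplicative inverse by exhaustive search; fine for a 16-element field."""
    for x in range(1, 16):
        if gf_mul(a, x) == 1:
            return x
    raise ZeroDivisionError("zero has no inverse")

def peterson(s, t):
    """Given syndromes s = [s_1, ..., s_2t], return [lambda_1, ..., lambda_v]
    for the largest v <= t whose syndrome matrix is non-singular."""
    for v in range(t, 0, -1):
        # Augmented rows [s_i, ..., s_{i+v-1} | s_{i+v}] for i = 1..v;
        # in characteristic 2 the right-hand side -s_{i+v} equals s_{i+v}.
        rows = [[s[i + j] for j in range(v)] + [s[i + v]] for i in range(v)]
        singular = False
        for col in range(v):                       # Gauss-Jordan elimination over GF(2^4)
            pivot = next((r for r in range(col, v) if rows[r][col]), None)
            if pivot is None:
                singular = True
                break
            rows[col], rows[pivot] = rows[pivot], rows[col]
            inv = gf_inv(rows[col][col])
            rows[col] = [gf_mul(inv, x) for x in rows[col]]
            for r in range(v):
                if r != col and rows[r][col]:
                    f = rows[r][col]
                    rows[r] = [x ^ gf_mul(f, y) for x, y in zip(rows[r], rows[col])]
        if singular:
            continue                               # assume fewer errors; retry with v - 1
        solution = [rows[i][v] for i in range(v)]  # [lambda_v, ..., lambda_1]
        return solution[::-1]                      # reorder to [lambda_1, ..., lambda_v]
    return []                                      # syndromes consistent with zero errors

# With the syndromes of the worked example below, the 3-by-3 system is singular,
# so the sketch falls back to v = 2:
# peterson([0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001], t=3)
#   == [0b1011, 0b1000], i.e. lambda_1 = alpha^7 and lambda_2 = alpha^3.
</syntaxhighlight>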
Line 410 ⟶ 407:
:<math>R(x) = C(x) + x^{13} + x^5 = x^{14} + x^{11} + x^{10} + x^9 + x^5 + x^4 + x^2</math>
In order to correct the errors, first calculate the syndromes. Taking <math>\alpha = 0010,</math> we have <math>s_1 = R(\alpha^1) = 1011,</math> <math>s_2 = 1001,</math> <math>s_3 = 1011,</math> <math>s_4 = 1101,</math> <math>s_5 = 0001,</math> and <math>s_6 = 1001.</math>
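These values can be checked with a short sketch, assuming GF(2<sup>4</sup>) is built on the primitive polynomial <math>x^4 + x + 1</math> and the 4-bit strings are read most significant bit first, so that <math>\alpha = 0010</math> is the integer 2; the helper names are illustrative.

<syntaxhighlight lang="python">
# Recompute the syndromes s_i = R(alpha^i) for
# R(x) = x^14 + x^11 + x^10 + x^9 + x^5 + x^4 + x^2 over GF(2^4).

def gf16_mul(a, b, prim=0b10011):
    """Multiply two GF(2^4) elements (integers 0..15), reducing by x^4 + x + 1."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return result

def gf16_pow(a, n):
    """Raise a to the n-th power by repeated multiplication."""
    r = 1
    for _ in range(n):
        r = gf16_mul(r, a)
    return r

received_exponents = [14, 11, 10, 9, 5, 4, 2]   # nonzero terms of R(x)
alpha = 0b0010

for i in range(1, 7):
    x = gf16_pow(alpha, i)                       # evaluate R at alpha^i
    s_i = 0
    for e in received_exponents:
        s_i ^= gf16_pow(x, e)                    # addition in GF(2^4) is bitwise XOR
    print(f"s_{i} = {s_i:04b}")
# prints 1011, 1001, 1011, 1101, 0001, 1001 as above
</syntaxhighlight>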
Next, apply the Peterson procedure by row-reducing the following [[augmented matrix]].
:<math>\left [ S_{3 \times 3} | C_{3 \times 1} \right ] =
\begin{bmatrix}s_1&s_2&s_3&s_4\\
s_2&s_3&s_4&s_5\\
s_3&s_4&s_5&s_6\end{bmatrix}</math>
Line 429 ⟶ 426:
 
==== Decoding with unreadable characters ====
Suppose the same scenario, but the received word has two unreadable characters [ 1 {{color|red|0}} 0 ? 1 1 ? 0 0 {{color|red|1}} 1 0 1 0 0 ]. We replace the unreadable characters by zeros and create the polynomial reflecting their positions, <math>\Gamma(x) = \left(\alpha^8x - 1\right)\left(\alpha^{11}x - 1\right).</math> We compute the syndromes <math>s_1=\alpha^{-7}, s_2=\alpha^{1}, s_3=\alpha^{4}, s_4=\alpha^{2}, s_5=\alpha^{5},</math> and <math>s_6=\alpha^{-7}.</math> (We use log notation, which is independent of the choice of GF(2<sup>4</sup>) isomorphism. To check the computations, we can use the same representation for addition as in the previous example. The [[hexadecimal]] descriptions of the powers of <math>\alpha</math> are, consecutively, 1,2,4,8,3,6,C,B,5,A,7,E,F,D,9, with addition based on bitwise XOR.)
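For illustration, the erasure-locator polynomial <math>\Gamma(x)</math> can be expanded with a short sketch under the same GF(2<sup>4</sup>) assumptions as before (primitive polynomial <math>x^4 + x + 1</math>, <math>\alpha = 0010</math> represented as the integer 2); note that <math>-1 = 1</math> in characteristic 2, and the helper names are illustrative.

<syntaxhighlight lang="python">
# Expand Gamma(x) = (alpha^8 x - 1)(alpha^11 x - 1) over GF(2^4).

def gf16_mul(a, b, prim=0b10011):
    """Multiply two GF(2^4) elements (integers 0..15), reducing by x^4 + x + 1."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return result

def gf16_pow(a, n):
    """Raise a to the n-th power by repeated multiplication."""
    r = 1
    for _ in range(n):
        r = gf16_mul(r, a)
    return r

def poly_mul(p, q):
    """Multiply polynomials with GF(2^4) coefficients (lists, low degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf16_mul(pi, qj)
    return out

alpha = 0b0010
erasure_positions = [8, 11]              # positions of the unreadable characters
gamma = [1]                              # start from the constant polynomial 1
for pos in erasure_positions:
    # each factor alpha^pos * x - 1 is [1, alpha^pos], since -1 = 1 over GF(2^m)
    gamma = poly_mul(gamma, [1, gf16_pow(alpha, pos)])
print(gamma)                             # [1, 0b1011, 0b0011], i.e. 1 + alpha^7 x + alpha^4 x^2
</syntaxhighlight>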
 
Let us construct the syndrome polynomial
Line 719 ⟶ 716:
|publisher = University at Buffalo
|url = http://www.cse.buffalo.edu/~atri/courses/coding-theory/
|archive-url = https://web.archive.org/web/20121218004156/http://www.cse.buffalo.edu:80/~atri/courses/coding-theory/
|access-date = April 21, 2010
|archive-date = 2012-12-18
|url-status = dead
}}