Block code
{{Short description|Family of error-correcting codes that encode data in blocks}}
{{More footnotes needed|date=February 2025}}
 
In [[coding theory]], '''block codes''' are a large and important family of [[Channel coding|error-correcting codes]] that encode data in blocks.
There are many examples of block codes, and many of them have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, [[mathematics|mathematician]]s, and [[computer science|computer scientist]]s to study the limitations of ''all'' block codes in a unified way.
Such limitations often take the form of ''bounds'' that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
 
Examples of block codes are [[Reed–Solomon code]]s, [[Hamming code]]s, [[Hadamard code]]s, [[Expander code]]s, [[Golay code (disambiguation)|Golay code]]s, [[Reed–Muller code]]s and [[Polar code (coding theory)|Polar code]]s. These examples also belong to the class of [[linear code]]s, and hence they are called '''linear block codes'''. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials.
 
Algebraic block codes are typically [[Soft-decision decoder|hard-decoded]] using algebraic decoders.{{Technical statement|date=May 2015}}
The data stream to be encoded is modeled as a [[string (computer science)|string]] over some '''alphabet''' <math>\Sigma</math>. The size <math>|\Sigma|</math> of the alphabet is often written as <math>q</math>. If <math>q=2</math>, then the block code is called a ''binary'' block code. In many applications it is useful to consider <math>q</math> to be a [[prime power]], and to identify <math>\Sigma</math> with the [[finite field]] <math>\mathbb F_q</math>.
 
=== The message length ''k'' ===
Messages are elements <math>m</math> of <math>\Sigma^k</math>, that is, strings of length <math>k</math>.
Hence the number <math>k</math> is called the '''message length''' or '''dimension''' of a block code.
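As a quick illustration of these parameters (a hypothetical toy example, not one drawn from the article), the message space for a given alphabet and message length can be enumerated directly:

```python
from itertools import product

# Hypothetical toy illustration: with the binary alphabet Sigma = {0, 1}
# (so q = 2) and message length k = 3, the message space Sigma^k
# contains exactly q**k = 8 strings.
sigma = (0, 1)   # the alphabet; |Sigma| = q = 2
k = 3            # message length (dimension)

messages = list(product(sigma, repeat=k))
print(len(messages))  # 8 = q**k
```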
 
=== The block length ''n'' ===
Then the minimum distance <math>d</math> of the code <math>C</math> is defined as
:<math>d := \min_{m_1,m_2\in\Sigma^k\atop m_1\neq m_2} \Delta[C(m_1),C(m_2)]</math>.
Since the encoding map of any code has to be [[injective]], any two codewords will disagree in at least one position, so the distance of any code is at least <math>1</math>. Moreover, for linear block codes the '''distance''' equals the '''[[Hamming weight#Minimum weight|minimum weight]]''', because:{{cn|date=December 2024}}
:<math>\min_{m_1,m_2\in\Sigma^k\atop m_1\neq m_2} \Delta[C(m_1),C(m_2)] = \min_{m_1,m_2\in\Sigma^k\atop m_1\neq m_2} \Delta[\mathbf{0},C(m_2)-C(m_1)] = \min_{m\in\Sigma^k\atop m\neq\mathbf{0}} w[C(m)] = w_\min</math>.
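This identity can be checked by brute force on a small linear code. The sketch below uses the binary even-weight (single parity check) code of length 3 as a hypothetical example; the pairwise minimum distance and the minimum nonzero weight come out equal, as stated:

```python
from itertools import product

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

# Hypothetical example: the binary [n=3, k=2] even-weight code,
# obtained by appending a parity bit to each 2-bit message.
def encode(m):
    return m + ((m[0] + m[1]) % 2,)

messages = list(product((0, 1), repeat=2))
codewords = [encode(m) for m in messages]

# Minimum distance: smallest Hamming distance over distinct codeword pairs.
d = min(hamming_distance(c1, c2)
        for i, c1 in enumerate(codewords)
        for c2 in codewords[i + 1:])

# For a linear code this equals the minimum weight of a nonzero codeword.
w_min = min(sum(c) for c in codewords if any(c))

print(d, w_min)  # 2 2
```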
 
A larger distance allows for more error correction and detection.
== Error detection and correction properties ==
 
A codeword <math>c \in \Sigma^n</math> can be considered as a point in the <math>n</math>-dimensional space <math>\Sigma^n</math>, and the code <math>\mathcal{C}</math> is a subset of <math>\Sigma^n</math>. That a code <math>\mathcal{C}</math> has distance <math>d</math> means that for every <math>c\in \mathcal{C}</math>, there is no other codeword in the ''[[Hamming ball]]'' centered at <math>c</math> with radius <math>d-1</math>, which is defined as the collection of <math>n</math>-dimensional words whose ''[[Hamming distance]]'' to <math>c</math> is no more than <math>d-1</math>. Similarly, <math> \mathcal{C}</math> with (minimum) distance <math>d</math> has the following properties:
* <math> \mathcal{C}</math> can detect <math>d-1</math> errors: because a codeword <math>c</math> is the only codeword in the Hamming ball centered at itself with radius <math>d-1</math>, no pattern of <math>d-1</math> or fewer errors can change one codeword into another. When the receiver detects that the received vector is not a codeword of <math> \mathcal{C}</math>, the errors are detected (but not necessarily corrected).
* <math> \mathcal{C}</math> can correct <math>\textstyle\left\lfloor {{d-1} \over 2}\right\rfloor</math> errors: because a codeword <math>c</math> is the only codeword in the Hamming ball centered at itself with radius <math>d-1</math>, the two Hamming balls of radius <math>\textstyle\left \lfloor {{d-1} \over 2}\right \rfloor</math> centered at two different codewords do not overlap. Therefore, if error correction is viewed as finding the codeword closest to the received word <math>y</math>, then as long as the number of errors is no more than <math>\textstyle\left \lfloor {{d-1} \over 2}\right \rfloor</math>, there is only one codeword in the Hamming ball centered at <math>y</math> with that radius, so all errors can be corrected.
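The correction radius described above can be demonstrated with minimum-distance (nearest-codeword) decoding. The sketch below is a hypothetical example using the binary repetition code of length 5, which has minimum distance <math>d = 5</math> and can therefore correct up to <math>\lfloor (5-1)/2 \rfloor = 2</math> errors:

```python
# Hypothetical sketch: nearest-codeword decoding for the binary
# repetition code of length n = 5 (two codewords, minimum distance d = 5).
codewords = [(0,) * 5, (1,) * 5]

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def decode(received):
    # Return the codeword closest to the received word in Hamming distance.
    return min(codewords, key=lambda c: hamming_distance(c, received))

sent = (1, 1, 1, 1, 1)
received = (1, 0, 1, 0, 1)       # two symbol errors: within the radius
print(decode(received) == sent)  # True: decoding succeeds

three_errors = (0, 0, 1, 0, 0)   # three errors exceed the radius of 2
print(decode(three_errors))      # decodes to (0, 0, 0, 0, 0), not the sent word
```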
 
== Lower and upper bounds of block codes ==
[[File:HammingLimit.png|thumb|720px|Hamming limit{{clarify|reason='Base' from y-axis legend does not occur in this article's textual content.|date=January 2022}}]]
[[File:Linear Binary Block Codes and their needed Check Symbols.png|thumb|720px|
There are theoretical limits (such as the Hamming limit), but another question is which codes can actually be constructed. It is like [[Sphere packing|packing spheres in a box]] in many dimensions. This diagram shows the constructible codes, which are linear and binary. The ''x'' axis shows the number of protected symbols ''k'', the ''y'' axis the number of needed check symbols ''n–k''. Plotted are the limits for different Hamming distances from 1 (unprotected) to 34.
Marked with dots are perfect codes:
{{bulleted list
* [[Shannon–Hartley theorem]]
* [[Noisy channel]]
* [[List decoding]]<ref name="schlegel" />
* [[Sphere packing]]
 
 
{{reflist}}
{{refbegin}}
 
* {{cite book | author=J.H. van Lint | authorlink=Jack van Lint | title=Introduction to Coding Theory | edition=2nd | publisher=Springer-Verlag | series=[[Graduate Texts in Mathematics|GTM]] | volume=86 | year=1992 | isbn=3-540-54894-7 | page=[https://archive.org/details/introductiontoco0000lint/page/31 31] | url=https://archive.org/details/introductiontoco0000lint/page/31 }}
* {{cite book | author=F.J. MacWilliams | authorlink=Jessie MacWilliams |author2=N.J.A. Sloane |authorlink2=Neil Sloane | title=The Theory of Error-Correcting Codes | url=https://archive.org/details/theoryoferrorcor0000macw | url-access=registration | publisher=North-Holland | year=1977 | isbn=0-444-85193-3 | page=[https://archive.org/details/theoryoferrorcor0000macw/page/35 35]}}
 
* {{cite book | author=W. Huffman |author2=V.Pless | authorlink2=Vera Pless | title=Fundamentals of error-correcting codes | url=https://archive.org/details/fundamentalsofer0000huff | url-access=registration | publisher=Cambridge University Press | year=2003 | isbn=978-0-521-78280-7}}
* {{cite book | author=S. Lin |author2=D. J. Jr. Costello | title= Error Control Coding: Fundamentals and Applications | publisher=Prentice-Hall | year=1983 | isbn=0-13-283796-X}}
{{refend}}
 
== External links ==