{{Short description|Class of error-correcting code}}
In [[coding theory]], a '''linear code''' is an [[error-correcting code]] for which any [[linear combination]] of [[Code word (communication)|codewords]] is also a codeword. Linear codes are traditionally partitioned into [[block code]]s and [[convolutional code]]s, although [[turbo code]]s can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. [[syndrome decoding]]).
Linear codes are used in [[forward error correction]] and are applied in methods for transmitting symbols (e.g., [[bit]]s) on a [[communications channel]] so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent.<ref name="DrMacKayECC">{{cite book| last=MacKay | first=David, J.C.| authorlink=David J.C. MacKay| title=Information Theory, Inference, and Learning Algorithms |year=2003 | pages=9|publisher=[[Cambridge University Press]]| isbn=9780521642989|url=http://www.inference.phy.cam.ac.uk/itprnn/book.pdf | quote="In a ''linear'' block code, the extra <math>N - K</math> bits are linear functions of the original <math>K</math> bits; these extra bits are called ''parity-check bits''"| bibcode=2003itil.book.....M}}</ref> A linear code of length ''n'' transmits blocks containing ''n'' symbols. For example, the [7,4,3] [[Hamming code]] is a linear [[binary code]] which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected.<ref name="Cover_and_Thomas">{{cite book|title=Elements of Information Theory|author=Thomas M. Cover and Joy A. Thomas|pages=[https://archive.org/details/elementsofinform0000cove/page/210 210–211]|year=1991|publisher=John Wiley & Sons, Inc|isbn=978-0-471-06259-2|url=https://archive.org/details/elementsofinform0000cove/page/210}}</ref> This code contains 2<sup>4</sup> = 16 codewords.
==Definition and parameters==
A '''linear code''' of length ''n'' and dimension ''k'' is a [[linear subspace]] ''C'' with [[Dimension (vector space)|dimension]] ''k'' of the [[vector space]] <math>\mathbb{F}_q^n</math>, where <math>\mathbb{F}_q</math> is the [[finite field]] with ''q'' elements. Such a code is called a ''q''-ary code. If ''q'' = 2 or ''q'' = 3, the code is described as a '''binary code''' or a '''ternary code''' respectively. The vectors in ''C'' are called ''codewords''. The '''size''' of a code is the number of codewords and equals ''q''<sup>''k''</sup>.
The '''weight''' of a codeword is the number of its elements that are nonzero and the '''distance''' between two codewords is the [[Hamming distance]] between them, that is, the number of elements in which they differ. The distance ''d'' of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length ''n'', dimension ''k'', and distance ''d'' is called an [''n'',''k'',''d''] code (or, more precisely, <math>[n,k,d]_q</math> code).
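For illustration, a minimal Python sketch (the function names are purely illustrative) computes weight and distance for words over <math>\mathbb{F}_2</math>; over this field the distance of two words equals the weight of their coordinatewise difference (XOR):

<syntaxhighlight lang="python">
def hamming_weight(c):
    """Number of nonzero coordinates of a word."""
    return sum(1 for x in c if x != 0)

def hamming_distance(c1, c2):
    """Number of coordinates in which two words of equal length differ."""
    return sum(1 for a, b in zip(c1, c2) if a != b)

# Over GF(2) the difference of two words is their coordinatewise XOR,
# so distance(c1, c2) = weight(c1 - c2):
c1 = (1, 0, 1, 1, 0)
c2 = (1, 1, 1, 0, 0)
assert hamming_distance(c1, c2) == hamming_weight([a ^ b for a, b in zip(c1, c2)])
</syntaxhighlight>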
We want to give <math>\mathbb{F}_q^n</math> the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a [[binary symmetric channel]]). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.
== Generator and check matrices ==
As a [[linear subspace]] of <math>\mathbb{F}_q^n</math>, the entire code ''C'' (which may be very large) may be represented as the [[span (linear algebra)|span]] of a set of <math>k</math> codewords (known as a [[basis (linear algebra)|basis]] in [[linear algebra]]). These basis codewords are often collated in the rows of a matrix G known as a '''[[Generator matrix|generating matrix]]''' for the code ''C''. When G has the block matrix form <math>\boldsymbol{G} = [I_k \mid P]</math>, where <math>I_k</math> denotes the <math>k \times k</math> identity matrix and P is some <math>k \times (n-k)</math> matrix, then we say G is in '''standard form'''.
A matrix ''H'' representing a linear function <math>\phi : \mathbb{F}_q^n\to \mathbb{F}_q^{n-k}</math> whose [[Kernel (matrix)|kernel]] is ''C'' is called a '''[[check matrix]]''' of ''C'' (or sometimes a parity check matrix). Equivalently, ''H'' is a matrix whose [[null space]] is ''C''. If ''C'' is a code with a generating matrix ''G'' in standard form, <math>\boldsymbol{G} = [I_k \mid P]</math>, then <math>\boldsymbol{H} = [-P^T \mid I_{n-k}]</math> is a check matrix for ''C''. The code generated by ''H'' is called the '''[[dual code]]''' of ''C''. It can be verified that ''G'' is a <math>k \times n</math> matrix, while ''H'' is an <math>(n-k) \times n</math> matrix.
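The relation between ''G'' in standard form and ''H'' can be illustrated with a short Python sketch (the example matrix and the helper function are illustrative only); over <math>\mathbb{F}_2</math> the block <math>-P^T</math> is simply <math>P^T</math>:

<syntaxhighlight lang="python">
def check_matrix_from_standard_form(G):
    """Given a binary generating matrix G = [I_k | P], return H = [P^T | I_{n-k}]."""
    k, n = len(G), len(G[0])
    P = [row[k:] for row in G]                                  # k x (n-k) block
    Pt = [[P[i][j] for i in range(k)] for j in range(n - k)]    # transpose of P
    I = [[int(i == j) for j in range(n - k)] for i in range(n - k)]
    return [Pt[i] + I[i] for i in range(n - k)]

G = [[1, 0, 0, 1, 1],        # an illustrative [5, 3] binary code in standard form
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]
H = check_matrix_from_standard_form(G)

# Every row of G is orthogonal to every row of H, i.e. G * H^T = 0 over GF(2).
assert all(sum(g[i] * h[i] for i in range(len(g))) % 2 == 0 for g in G for h in H)
</syntaxhighlight>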
Linearity guarantees that the minimum [[Hamming distance]] ''d'' between a codeword ''c''<sub>0</sub> and any of the other codewords ''c'' ≠ ''c''<sub>0</sub> is independent of ''c''<sub>0</sub>. This follows from the property that the difference ''c'' − ''c''<sub>0</sub> of two codewords in ''C'' is also a codeword (i.e., an [[element (mathematics)|element]] of the subspace ''C''), and the property that ''d''(''c'', ''c''<sub>0</sub>) = ''d''(''c'' − ''c''<sub>0</sub>, 0). These properties imply that
:<math>\min_{c \in C,\ c \neq c_0} d(c,c_0) = \min_{c \in C,\ c \neq c_0} d(c - c_0, 0) = \min_{c \in C,\ c \neq 0} d(c, 0) = d.</math>
In other words, in order to find the minimum distance between the codewords of a linear code, one only needs to look at the non-zero codewords. The non-zero codeword of smallest weight then has the minimum distance to the zero codeword, and hence determines the minimum distance of the code.
The distance ''d'' of a linear code ''C'' also equals the minimum number of linearly dependent columns of the check matrix ''H''.
''Proof:'' Let <math>\boldsymbol{c}</math> be a nonzero codeword of minimum weight <math>d</math>. Because <math>\boldsymbol{H} \cdot \boldsymbol{c}^T = \boldsymbol{0}</math>, which is equivalent to <math>\sum_{i=1}^n (c_i \cdot \boldsymbol{H_i}) = \boldsymbol{0}</math>, where <math>\boldsymbol{H_i}</math> is the <math>i</math>-th column of <math>\boldsymbol{H}</math>, the <math>d</math> columns <math>\boldsymbol{H_i}</math> with <math>c_i \neq 0</math> are linearly dependent. Therefore, <math>d</math> is at least the minimum number of linearly dependent columns. Conversely, consider a minimum set of linearly dependent columns <math>\{ \boldsymbol{H_j} \mid j \in S \}</math>, where <math>S</math> is the column index set. By minimality, there is a dependence relation in which every column of the set has a nonzero coefficient <math>c_j</math>. Setting <math>c_j = 0</math> for <math>j \notin S</math> gives a vector <math>\boldsymbol{c}</math> with <math>\boldsymbol{H} \cdot \boldsymbol{c}^T = \boldsymbol{0}</math>, hence <math>\boldsymbol{c} \in C</math> and <math>d \leq \mathrm{wt}(\boldsymbol{c}) = |S|</math>. Therefore <math>d</math> equals the minimum number of linearly dependent columns of <math>\boldsymbol{H}</math>.
== Example: Hamming codes ==
'''Example:''' The linear block code with the following generator matrix and parity check matrix is a <math>[7,4,3]_2</math> Hamming code.
: <math>\boldsymbol{G}=\begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \end{pmatrix} ,\qquad \boldsymbol{H}=\begin{pmatrix} 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}</math>
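A short Python sketch (illustrative only) confirms the parameters of this code and demonstrates single-error correction: the syndrome <math>\boldsymbol{H}\boldsymbol{y}^T</math> of a received word with one flipped bit equals the column of ''H'' at the error position.

<syntaxhighlight lang="python">
from itertools import product

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(m):
    """Encode a 4-bit message m as the codeword m * G over GF(2)."""
    return tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

codewords = [encode(m) for m in product([0, 1], repeat=4)]
assert len(set(codewords)) == 16
assert min(sum(c) for c in codewords if any(c)) == 3     # minimum distance is 3

def syndrome(y):
    return tuple(sum(H[r][j] * y[j] for j in range(7)) % 2 for r in range(3))

c = encode((1, 0, 1, 1))
y = list(c)
y[5] ^= 1                                                # flip one bit
columns = [tuple(H[r][j] for r in range(3)) for j in range(7)]
y[columns.index(syndrome(y))] ^= 1                       # locate and correct the error
assert tuple(y) == c
</syntaxhighlight>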
== Example: Hadamard codes ==
'''Example:''' The linear block code with the following generator matrix is an <math>[8,3,4]_2</math> Hadamard code:
: <math>\boldsymbol{G}_\mathrm{Had}=\begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}</math>
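The parameters can be checked with a small Python sketch (illustrative only): every nonzero codeword generated by <math>\boldsymbol{G}_\mathrm{Had}</math> has weight exactly 4, so the minimum distance is 4.

<syntaxhighlight lang="python">
from itertools import product

G_had = [[0, 0, 0, 0, 1, 1, 1, 1],
         [0, 0, 1, 1, 0, 0, 1, 1],
         [0, 1, 0, 1, 0, 1, 0, 1]]

codewords = [
    tuple(sum(m[i] * G_had[i][j] for i in range(3)) % 2 for j in range(8))
    for m in product([0, 1], repeat=3)
]
assert len(set(codewords)) == 8                       # 2^3 distinct codewords
assert all(sum(c) == 4 for c in codewords if any(c))  # every nonzero codeword has weight 4
</syntaxhighlight>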
The [[Hadamard code]] is a special case of a [[Reed–Muller code]]. If we take the first column (the all-zero column) out of <math>\boldsymbol{G}_\mathrm{Had}</math>, we get the <math>[7,3,4]_2</math> ''simplex code'', which is the ''dual code'' of the Hamming code.
==Nearest neighbor algorithm==
The parameter d is closely related to the error correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm):
Input: A ''received vector'' v in <math>\mathbb{F}_q^n.</math>
Output: A codeword <math>w</math> in <math>C</math> closest to <math>v</math>, if any.
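A brute-force version of this idea can be sketched in Python (illustrative only; for codes of realistic size one enumerates a Hamming ball of growing radius around ''v'' or uses decoders tailored to the code's structure):

<syntaxhighlight lang="python">
def hamming_distance(x, y):
    return sum(1 for a, b in zip(x, y) if a != b)

def nearest_neighbour_decode(v, codewords, d):
    """Return the unique codeword within distance (d-1)/2 of v, or None if none exists."""
    best = min(codewords, key=lambda c: hamming_distance(c, v))
    return best if hamming_distance(best, v) <= (d - 1) // 2 else None
</syntaxhighlight>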
==Popular notation==
{{main|Block code#Popular notation}}
[[Code]]s in general are often denoted by the letter ''C'', and a code of length ''n'' and of [[dimension (linear algebra)|rank]] ''k'' (i.e., having ''k'' code words in its basis and ''k'' rows in its generating matrix) is generally referred to as an (''n'', ''k'') code. Linear block codes are frequently denoted as [''n'', ''k'', ''d''] codes, where ''d'' refers to the code's minimum Hamming distance between any two code words.
(The [''n'', ''k'', ''d''] notation should not be confused with the (''n'', ''M'', ''d'') notation used to denote a ''non-linear'' code of length ''n'', size ''M'' (i.e., having ''M'' code words), and minimum Hamming distance ''d''.)
==Singleton bound==
''Lemma'' ([[Singleton bound]]): Every linear [''n'',''k'',''d''] code ''C'' satisfies <math>k+d \leq n+1</math>.
A code ''C'' whose parameters satisfy ''k'' + ''d'' = ''n'' + 1 is called '''maximum distance separable''' or '''MDS'''. Such codes, when they exist, are in some sense best possible.
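As an illustration (a trivial Python check, not a proof), the [7,4,3] Hamming code satisfies the bound with slack and is therefore not MDS, while a [5,1,5] binary [[repetition code]] meets it with equality:

<syntaxhighlight lang="python">
for (n, k, d) in [(7, 4, 3), (5, 1, 5)]:
    assert k + d <= n + 1                                    # Singleton bound
    print((n, k, d), "MDS" if k + d == n + 1 else "not MDS")
</syntaxhighlight>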
If ''C''<sub>1</sub> and ''C''<sub>2</sub> are two codes of length ''n'' and if there is a permutation ''p'' in the [[symmetric group]] ''S''<sub>''n''</sub> for which (''c''<sub>1</sub>,...,''c''<sub>''n''</sub>) is in ''C''<sub>1</sub> if and only if (''c''<sub>''p''(1)</sub>,...,''c''<sub>''p''(''n'')</sub>) is in ''C''<sub>2</sub>, then we say ''C''<sub>1</sub> and ''C''<sub>2</sub> are '''permutation equivalent'''. More generally, if there is an <math>n\times n</math> [[monomial matrix]] <math>M\colon \mathbb{F}_q^n \to \mathbb{F}_q^n</math> which sends ''C''<sub>1</sub> isomorphically to ''C''<sub>2</sub>, then we say ''C''<sub>1</sub> and ''C''<sub>2</sub> are '''equivalent'''.
''Lemma'': Any linear code is permutation equivalent to a code which is in standard form.
==Bonisoli's theorem==
A code is defined to be '''equidistant''' if and only if there exists some constant ''d'' such that the distance between any two of the code's distinct codewords is equal to ''d''. In 1984 Arrigo Bonisoli determined the structure of linear one-weight codes over finite fields and proved that every equidistant linear code is a sequence of [[dual code|dual]] [[Hamming code]]s.
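As a concrete instance (an illustrative Python sketch, reusing the <math>[7,3,4]_2</math> simplex code obtained above by deleting the all-zero column of the Hadamard generator matrix), one can verify that all pairwise distances in a dual Hamming code are equal:

<syntaxhighlight lang="python">
from itertools import product

G_simplex = [[0, 0, 0, 1, 1, 1, 1],    # G_Had with its all-zero column removed
             [0, 1, 1, 0, 0, 1, 1],
             [1, 0, 1, 0, 1, 0, 1]]

codewords = [
    tuple(sum(m[i] * G_simplex[i][j] for i in range(3)) % 2 for j in range(7))
    for m in product([0, 1], repeat=3)
]
# Equidistant: any two distinct codewords are at Hamming distance exactly 4.
assert all(sum(a != b for a, b in zip(c1, c2)) == 4
           for c1 in codewords for c2 in codewords if c1 != c2)
</syntaxhighlight>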
==Examples==
Some examples of linear codes include:
{{div col|colwidth=27em}}
* [[Repetition code]]
* [[Parity code]]
* [[Cyclic code]]
* [[Hamming code]]
* [[Golay code (disambiguation)|Golay code]], both the [[binary Golay code|binary]] and [[ternary Golay code|ternary]] versions
* [[Polynomial code]]s, of which [[BCH code]]s are an example
* [[Reed–Solomon error correction|Reed–Solomon codes]]
* [[Reed–Muller code]]
* [[Binary Goppa code]]
* [[Low-density parity-check codes]]
* [[Expander code]]
* [[Multidimensional parity-check code]]
* [[Toric code]]
* [[Turbo code]]
* [[Locally recoverable code]]
{{div col end}}
==Generalization==
[[Hamming space]]s over non-field alphabets have also been considered, especially over [[finite ring]]s, most notably over [[Z/4Z|'''Z'''<sub>4</sub>]]. This gives rise to [[module (mathematics)|modules]] instead of vector spaces and [[ring-linear code]]s (identified with [[submodule]]s) instead of linear codes. The typical metric used in this case is the [[Lee distance]].
== See also ==