In [[coding theory]], a '''linear code''' is an [[error-correcting code]] for which any [[linear combination]] of [[Code word (communication)|codewords]] is also a codeword. Linear codes are traditionally partitioned into [[block code]]s and [[convolutional code]]s, although [[turbo code]]s can be seen as a hybrid of these two types.<ref>{{cite book|title=Channel Codes: Classical and Modern|url=https://archive.org/details/channelcodesclas00ryan|url-access=limited|author=William E. Ryan and Shu Lin|page=[https://archive.org/details/channelcodesclas00ryan/page/n21 4]|year=2009|publisher=Cambridge University Press|isbn=978-0-521-84868-8}}</ref> Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. [[syndrome decoding]]).{{citation needed|date=April 2018}}
Linear codes are used in [[forward error correction]] and are applied in methods for transmitting symbols (e.g., [[bit]]s) on a [[communications channel]] so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent.<ref name="DrMacKayECC">{{cite book| last=MacKay | first=David, J.C.| authorlink=David J.C. MacKay| title=Information Theory, Inference, and Learning Algorithms |year=2003 | pages=9|publisher=[[Cambridge University Press]]| isbn=9780521642989|url=http://www.inference.phy.cam.ac.uk/itprnn/book.pdf | quote="In a ''linear'' block code, the extra <math>N - K</math> bits are linear functions of the original <math>K</math> bits; these extra bits are called ''parity-check bits''"| bibcode=2003itil.book.....M}}</ref> A linear code of length ''n'' transmits blocks containing ''n'' symbols. For example, the [7,4,3] [[Hamming code]] is a linear [[binary code]] which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected.<ref name="Cover_and_Thomas">{{cite book|title=Elements of Information Theory|author=Thomas M. Cover and Joy A. Thomas|pages=[https://archive.org/details/elementsofinform0000cove/page/210 210–211]|year=1991|publisher=John Wiley & Sons, Inc|isbn=978-0-471-06259-2|url=https://archive.org/details/elementsofinform0000cove/page/210}}</ref> This code contains 2<sup>4</sup> = 16 codewords.
==Definition and parameters==
== Generator and check matrices ==
As a [[linear subspace]] of <math>\mathbb{F}_q^n</math>, the entire code ''C'' (which may be very large) may be represented as the [[span (linear algebra)|span]] of a set of <math>k</math> codewords (known as a [[basis (linear algebra)|basis]] in [[linear algebra]]). These basis codewords are often collated in the rows of a matrix ''G'' known as a '''[[Generator matrix|generating matrix]]''' for the code ''C''. When ''G'' has the block matrix form <math>\boldsymbol{G} = [I_k \mid P]</math>, where <math>I_k</math> denotes the <math>k \times k</math> identity matrix and ''P'' is some <math>k \times (n-k)</math> matrix, then we say ''G'' is in '''standard form'''.
A matrix ''H'' representing a linear function <math>\phi : \mathbb{F}_q^n\to \mathbb{F}_q^{n-k}</math> whose [[Kernel (matrix)|kernel]] is ''C'' is called a '''[[check matrix]]''' of ''C'' (or sometimes a parity check matrix). Equivalently, ''H'' is a matrix whose [[null space]] is ''C''. If ''C'' is a code with a generating matrix ''G'' in standard form, <math>\boldsymbol{G} = [I_k \mid P]</math>, then <math>\boldsymbol{H} = [-P^T \mid I_{n-k}]</math> is a check matrix for ''C''.
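The relationship between a standard-form generating matrix and its check matrix can be sketched in a few lines of Python (a minimal illustration assuming NumPy and an arbitrary choice of ''P'' over GF(2), where −''P''<sup>T</sup> = ''P''<sup>T</sup>):

<syntaxhighlight lang="python">
import numpy as np

k, n = 3, 6
# Generating matrix in standard form G = [I_k | P] over GF(2);
# P is an arbitrary k x (n-k) matrix chosen for illustration.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(k, dtype=int), P])

# Check matrix H = [-P^T | I_{n-k}]; over GF(2), -P^T equals P^T.
H = np.hstack([P.T, np.eye(n - k, dtype=int)])

# Every codeword mG lies in the kernel of H, i.e. G H^T = 0 (mod 2).
assert not np.any((G @ H.T) % 2)
</syntaxhighlight>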
Linearity guarantees that the minimum [[Hamming distance]] ''d'' between a codeword ''c''<sub>0</sub> and any of the other codewords ''c'' ≠ ''c''<sub>0</sub> is independent of ''c''<sub>0</sub>. This follows from the property that the difference ''c'' − ''c''<sub>0</sub> of two codewords in ''C'' is also a codeword (i.e., an [[element (mathematics)|element]] of the subspace ''C''), and the property that ''d''(''c'', ''c''<sub>0</sub>) = ''d''(''c'' − ''c''<sub>0</sub>, 0). These properties imply that
: <math>\min_{c \in C,\ c \neq c_0} d(c, c_0) = \min_{c \in C,\ c \neq 0} d(c, 0) = \min_{c \in C,\ c \neq 0} \mathrm{wt}(c) = d.</math>
The distance ''d'' of a linear code ''C'' also equals the minimum number of linearly dependent columns of the check matrix ''H''.
''Proof:'' Let <math>\boldsymbol{c}</math> be a nonzero codeword of minimum weight <math>d</math>. Because <math>\boldsymbol{H} \cdot \boldsymbol{c}^T = \boldsymbol{0}</math>, which is equivalent to <math>\sum_{i=1}^n c_i \cdot \boldsymbol{H_i} = \boldsymbol{0}</math>, where <math>\boldsymbol{H_i}</math> is the <math>i</math>-th column of <math>\boldsymbol{H}</math>, the <math>d</math> columns <math>\boldsymbol{H_i}</math> with <math>c_i \neq 0</math> are linearly dependent. Therefore, <math>d</math> is at least the minimum number of linearly dependent columns. Conversely, consider a minimal set of linearly dependent columns <math>\{ \boldsymbol{H_j} \mid j \in S \}</math>, where <math>S</math> is the column index set, so that <math>\sum_{j \in S} c_j \cdot \boldsymbol{H_j} = \boldsymbol{0}</math> for some coefficients <math>c_j \neq 0</math>. Defining <math>c_j = 0</math> for <math>j \notin S</math> gives a nonzero vector <math>\boldsymbol{c}</math> with <math>\boldsymbol{H} \cdot \boldsymbol{c}^T = \boldsymbol{0}</math>, i.e., a codeword of weight <math>|S|</math>. Hence <math>d \leq |S|</math>, so <math>d</math> equals the minimum number of linearly dependent columns of <math>\boldsymbol{H}</math>.
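This equality can be checked by brute force for small binary codes; the following sketch (assuming NumPy, and using the <math>[7,4,3]_2</math> Hamming code matrices from the example below) compares the minimum codeword weight with the smallest number of columns of ''H'' that sum to zero:

<syntaxhighlight lang="python">
import itertools
import numpy as np

def min_codeword_weight(G):
    """Minimum Hamming weight over all nonzero codewords mG of a binary code."""
    k = G.shape[0]
    weights = [int(((np.array(m) @ G) % 2).sum())
               for m in itertools.product([0, 1], repeat=k) if any(m)]
    return min(weights)

def min_dependent_columns(H):
    """Smallest number of columns of H summing to zero mod 2, i.e. the size
    of the smallest linearly dependent set of columns over GF(2)."""
    n = H.shape[1]
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            if not np.any(H[:, cols].sum(axis=1) % 2):
                return r

G = np.array([[1,0,0,0,1,1,0], [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1], [0,0,0,1,1,0,1]])
H = np.array([[1,0,1,1,1,0,0], [1,1,1,0,0,1,0], [0,1,1,1,0,0,1]])
assert min_codeword_weight(G) == min_dependent_columns(H) == 3
</syntaxhighlight>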
== Example: Hamming codes ==
'''Example:''' The linear block code with the following generator matrix and parity check matrix is a <math>[7,4,3]_2</math> Hamming code.
: <math>\boldsymbol{G}=\begin{pmatrix} 1&0&0&0&1&1&0 \\ 0&1&0&0&0&1&1 \\ 0&0&1&0&1&1&1 \\ 0&0&0&1&1&0&1 \end{pmatrix},\qquad \boldsymbol{H}=\begin{pmatrix} 1&0&1&1&1&0&0 \\ 1&1&1&0&0&1&0 \\ 0&1&1&1&0&0&1 \end{pmatrix}.</math>
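A short sketch of single-error correction with these matrices (assuming NumPy; the table-based syndrome decoding shown here is one illustrative decoding method, not the only one):

<syntaxhighlight lang="python">
import numpy as np

G = np.array([[1,0,0,0,1,1,0], [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1], [0,0,0,1,1,0,1]])
H = np.array([[1,0,1,1,1,0,0], [1,1,1,0,0,1,0], [0,1,1,1,0,0,1]])

# The syndrome of a single-bit error in position i is the i-th column of H;
# all seven columns are distinct and nonzero, so one error is correctable.
syndrome_to_position = {tuple(H[:, i]): i for i in range(7)}

message = np.array([1, 0, 1, 1])
codeword = (message @ G) % 2        # encode 4 message bits into 7 code bits

received = codeword.copy()
received[2] ^= 1                    # simulate a single channel error

syndrome = tuple((received @ H.T) % 2)
if any(syndrome):                   # nonzero syndrome: flip the indicated bit
    received[syndrome_to_position[syndrome]] ^= 1
assert np.array_equal(received, codeword)
</syntaxhighlight>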
== Example: Hadamard codes ==
'''Example:''' The linear block code with the following generator matrix is an <math>[8,3,4]_2</math> Hadamard code:
: <math>\boldsymbol{G}_\mathrm{Had}=\begin{pmatrix} 0&0&0&0&1&1&1&1 \\ 0&0&1&1&0&0&1&1 \\ 0&1&0&1&0&1&0&1 \end{pmatrix}.</math>
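As a quick check, one can enumerate the 2<sup>3</sup> = 8 codewords and confirm that every nonzero codeword has weight 4 (a minimal sketch assuming NumPy):

<syntaxhighlight lang="python">
import itertools
import numpy as np

G_had = np.array([[0,0,0,0,1,1,1,1],
                  [0,0,1,1,0,0,1,1],
                  [0,1,0,1,0,1,0,1]])

# All 2^3 = 8 codewords of the [8,3,4] Hadamard code.
codewords = [(np.array(m) @ G_had) % 2
             for m in itertools.product([0, 1], repeat=3)]

# Every nonzero codeword has Hamming weight exactly n/2 = 4.
assert all(c.sum() == 4 for c in codewords if c.any())
</syntaxhighlight>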
The [[Hadamard code]] is a special case of a [[Reed–Muller code]]. If we remove the first column (the all-zero column) from <math>\boldsymbol{G}_\mathrm{Had}</math>, we obtain the <math>[7,3,4]_2</math> ''simplex code'', which is the [[dual code]] of the Hamming code.
==Nearest neighbor algorithm==
The parameter ''d'' is closely related to the error-correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm):
Input: A ''received vector'' <math>v</math> in <math>\mathbb{F}_q^n</math>.
Output: A codeword <math>w</math> in <math>C</math> closest to <math>v</math>, if any.
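A brute-force sketch of such a decoder for a small binary code follows (assuming NumPy; the function name is illustrative, and ''t'' would normally be taken as ⌊(''d'' − 1)/2⌋):

<syntaxhighlight lang="python">
import itertools
import numpy as np

def nearest_neighbor_decode(v, G, t):
    """Return a codeword of the binary code generated by G lying within
    Hamming distance t of the received vector v, or None if there is none.
    Brute force over all 2^k codewords; feasible only for small k."""
    k = G.shape[0]
    for m in itertools.product([0, 1], repeat=k):
        c = (np.array(m) @ G) % 2
        if np.count_nonzero(c != v) <= t:
            return c
    return None

# Usage with the [7,4,3] Hamming code of the example above and t = 1.
G = np.array([[1,0,0,0,1,1,0], [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1], [0,0,0,1,1,0,1]])
v = np.array([1, 0, 1, 1, 0, 0, 1])
print(nearest_neighbor_decode(v, G, t=1))
</syntaxhighlight>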
==Popular notation==
{{main|Block code#Popular notation}}
[[Code]]s in general are often denoted by the letter ''C'', and a code of length ''n'' and of [[dimension (linear algebra)|rank]] ''k'' (i.e., having ''k'' code words in its basis and ''k'' rows in its generating matrix) is generally referred to as an [''n'', ''k''] code. Linear block codes are frequently denoted as [''n'', ''k'', ''d''] codes, where ''d'' refers to the code's minimum Hamming distance between any two code words.
(The [''n'', ''k'', ''d''] notation should not be confused with the (''n'', ''M'', ''d'') notation used to denote a ''non-linear'' code of length ''n'', size ''M'' (i.e., having ''M'' code words), and minimum Hamming distance ''d''.)
==Singleton bound==
''Lemma'' ([[Singleton bound]]): Every linear [''n'', ''k'', ''d''] code ''C'' satisfies <math>k+d \leq n+1</math>.

A code ''C'' whose parameters satisfy ''k'' + ''d'' = ''n'' + 1 is called '''maximum distance separable''' or '''MDS'''. Such codes, when they exist, are in some sense best possible.
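For example, the <math>[7,4,3]_2</math> Hamming code satisfies <math>k + d = 7 < n + 1 = 8</math> and is therefore not MDS, while the <math>[n,1,n]_2</math> repetition code and the <math>[n,n-1,2]_2</math> single parity-check code satisfy <math>k + d = n + 1</math> and are MDS.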
If ''C''<sub>1</sub> and ''C''<sub>2</sub> are two codes of length ''n'' and if there is a permutation ''p'' in the [[symmetric group]] ''S''<sub>''n''</sub> for which (''c''<sub>1</sub>,...,''c''<sub>''n''</sub>) is in ''C''<sub>1</sub> if and only if (''c''<sub>''p''(1)</sub>,...,''c''<sub>''p''(''n'')</sub>) is in ''C''<sub>2</sub>, then we say ''C''<sub>1</sub> and ''C''<sub>2</sub> are '''permutation equivalent'''. More generally, if there is an <math>n\times n</math> [[monomial matrix]] <math>M\colon \mathbb{F}_q^n \to \mathbb{F}_q^n</math> which sends ''C''<sub>1</sub> isomorphically to ''C''<sub>2</sub>, then we say ''C''<sub>1</sub> and ''C''<sub>2</sub> are '''equivalent'''.
''Lemma'': Any linear code is permutation equivalent to a code which is in standard form.
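A minimal sketch of such a reduction over GF(2) (assuming NumPy and a full-rank generating matrix; the column swaps realize the permutation, so the resulting code is permutation equivalent to the original):

<syntaxhighlight lang="python">
import numpy as np

def standard_form(G):
    """Bring a full-rank binary generating matrix into standard form [I_k | P]
    by row reduction over GF(2) and column swaps; returns the reduced matrix
    together with the column permutation that was applied."""
    G = G.copy() % 2
    k, n = G.shape
    perm = list(range(n))
    for r in range(k):
        # Choose a pivot column having a 1 in rows r..k-1 and move it to r.
        j = next(j for j in range(r, n) if G[r:, j].any())
        G[:, [r, j]] = G[:, [j, r]]
        perm[r], perm[j] = perm[j], perm[r]
        if G[r, r] == 0:                       # bring a 1 into the pivot row
            s = next(i for i in range(r + 1, k) if G[i, r])
            G[[r, s]] = G[[s, r]]
        for i in range(k):                     # clear the rest of the column
            if i != r and G[i, r]:
                G[i] = (G[i] + G[r]) % 2
    return G, perm
</syntaxhighlight>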
Some examples of linear codes include:
{{div col|colwidth=27em}}
* [[Repetition code]]
* [[Parity code]]
* [[Cyclic code]]
* [[Hamming code]]
* [[Golay code (disambiguation)|Golay code]], both the [[binary Golay code|binary]] and [[ternary Golay code|ternary]] versions
* [[Polynomial code]]s, of which [[BCH code]]s are an example
* [[Reed–Solomon error correction|Reed–Solomon codes]]
* [[Reed–Muller code]]
* [[Algebraic geometry code]]
* [[Binary Goppa code]]
* [[Low-density parity-check codes]]
* [[Expander code]]
* [[Multidimensional parity-check code]]
* [[Toric code]]
* [[Turbo code]]
* [[Locally recoverable code]]
{{div col end}}
==Generalization==
[[Hamming space]]s over non-field alphabets have also been considered, especially over [[finite ring]]s, most notably [[Galois ring]]s over [[modular arithmetic|'''Z'''<sub>4</sub>]]. This gives rise to [[module (mathematics)|module]]s instead of vector spaces and [[ring-linear code]]s (identified with [[submodule]]s) instead of linear codes. The typical metric used in this case is the [[Lee distance]]. There exists a [[Gray isometry]] between <math>\mathbb{Z}_2^{2m}</math> (i.e. GF(2<sup>2''m''</sup>)) with the Hamming distance and <math>\mathbb{Z}_4^m</math> (also denoted as GR(4, ''m'')) with the Lee distance.
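On a single symbol the Gray map sends 0 ↦ 00, 1 ↦ 01, 2 ↦ 11, 3 ↦ 10, turning Lee weight into Hamming weight; a small sketch (with illustrative helper names):

<syntaxhighlight lang="python">
# Gray map on a single Z_4 symbol: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(word):
    """Lee weight of a word over Z_4."""
    return sum(min(z, 4 - z) for z in word)

def gray_image(word):
    """Concatenation of the Gray images of the symbols (a binary word)."""
    return tuple(bit for z in word for bit in GRAY[z])

word = (0, 1, 2, 3)
# The Gray map is an isometry: Lee weight equals Hamming weight of the image.
assert lee_weight(word) == sum(gray_image(word)) == 4
</syntaxhighlight>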
== See also ==