{{Use dmy dates|date=July 2014}}
In [[coding theory]], '''concatenated codes''' form a class of [[error-correcting code]]s that are derived by combining an '''inner code''' and an '''outer code'''. They were conceived in 1966 by [[Dave Forney]] as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and [[polynomial-time]] decoding complexity.<ref name="Forney">
{{cite journal
|author=G. David Forney Jr.
|author-link=Dave Forney
|title=Concatenated codes
|publisher=MIT Press
|year=1966
}}</ref>
The field of [[channel coding]] is concerned with sending a stream of data at the highest possible rate over a given [[communications channel]], and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology.
[[Noisy-channel coding theorem|Shannon's channel coding theorem]] shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates less than a certain threshold, called the [[channel capacity]] of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with the block length, so such an optimum decoder rapidly becomes infeasible.
In his [https://web.archive.org/web/20121012080412/http://mitpress.mit.edu/catalog/item/default.asp?tid=5813&ttype=2 doctoral thesis], [[Dave Forney]] showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.
==Description==
[[File:Concatenation of Reed–Solomon code with Hadamard code.svg|thumb|400px|This is a pictorial representation of a code concatenation, and, in particular, the [[Reed–Solomon code]] with n=q=4 and k=2 is used as the outer code and the [[Hadamard code]] with n=q and k=log q is used as the inner code. Overall, the concatenated code is a <math>[q^2,k \log q]</math>-code.]]
Let ''C''<sub>''in''</sub> be a [''n'', ''k'', ''d''] code, that is, a [[block code]] of length ''n'', [[dimension (vector space)|dimension]] ''k'', minimum [[Hamming distance]] ''d'', and [[code rate|rate]] ''r'' = ''k''/''n'', over an alphabet ''A'':
:<math>C_{in}: A^k \rightarrow A^n</math>
Let ''C''<sub>''out''</sub> be a [''N'', ''K'', ''D''] code over an alphabet ''B'' with |''B''| = |''A''|<sup>''k''</sup> symbols:
:<math>C_{out}: B^K \rightarrow B^N</math>
The inner code ''C''<sub>''in''</sub> takes one of |''A''|<sup>''k''</sup> = |''B''| possible inputs, encodes into an ''n''-tuple over ''A'', transmits, and decodes into one of |''B''| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet ''B''. We use this channel ''N'' times to transmit each of the ''N'' symbols in a codeword of ''C''<sub>''out''</sub>. The ''concatenation'' of ''C''<sub>''out''</sub> (as outer code) with ''C''<sub>''in''</sub> (as inner code), denoted ''C''<sub>''out''</sub>∘''C''<sub>''in''</sub>, is a code of length ''nN'' over the alphabet ''A'':
:<math>C_{out} \circ C_{in}: A^{kK} \rightarrow A^{nN}</math>
It maps each input message ''m'' = (''m''<sub>1</sub>, ''m''<sub>2</sub>, ..., ''m''<sub>K</sub>) to a codeword (''C''<sub>''in''</sub>(''m''<nowiki>'</nowiki><sub>1</sub>), ''C''<sub>''in''</sub>(''m''<nowiki>'</nowiki><sub>2</sub>), ..., ''C''<sub>''in''</sub>(''m''<nowiki>'</nowiki><sub>N</sub>)), where (''m''<nowiki>'</nowiki><sub>1</sub>, ''m''<nowiki>'</nowiki><sub>2</sub>, ..., ''m''<nowiki>'</nowiki><sub>''N''</sub>) = ''C''<sub>''out''</sub>(''m''<sub>1</sub>, ''m''<sub>2</sub>, ..., ''m''<sub>''K''</sub>).
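The construction can be made concrete with a short Python sketch. The two component codes below are toy single-parity-check codes chosen purely for illustration, and all function names are likewise illustrative; they are not codes discussed elsewhere in this article.
<syntaxhighlight lang="python">
from itertools import product

k, n = 2, 3          # inner code parameters: C_in maps A^k -> A^n, with A = {0, 1}
K, N = 2, 3          # outer code parameters: C_out maps B^K -> B^N, with B = A^k

def inner_encode(sym):
    """C_in: a toy [3, 2, 2] binary single-parity-check code."""
    return tuple(sym) + (sum(sym) % 2,)

def outer_encode(msg):
    """C_out: a toy [3, 2, 2] single-parity-check code over B = A^k."""
    check = tuple(a ^ b for a, b in zip(msg[0], msg[1]))
    return (msg[0], msg[1], check)

def concatenated_encode(msg):
    """C_out ∘ C_in: outer-encode to N symbols of B, then inner-encode each symbol."""
    return tuple(bit for sym in outer_encode(msg) for bit in inner_encode(sym))

# Encode every message in B^K = (A^k)^K and print the resulting length-nN codeword over A.
for m in product(product((0, 1), repeat=k), repeat=K):
    print(m, "->", concatenated_encode(m))
</syntaxhighlight>
Each of the 16 messages in ''A''<sup>''kK''</sup> is mapped to a codeword of length ''nN'' = 9, matching the map <math>A^{kK} \rightarrow A^{nN}</math> above.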
The ''key insight'' in this approach is that if ''C''<sub>''in''</sub> is decoded using a [[maximum likelihood decoding|maximum-likelihood approach]] (thus showing an exponentially decreasing error probability with increasing length), and ''C''<sub>''out''</sub> is a code of length ''N'' = 2<sup>''nr''</sup> that can be decoded in time polynomial in ''N'', then the concatenated code can be decoded in time polynomial in its combined length ''n''2<sup>''nr''</sup> = [[O notation|''O'']](''N''⋅log(''N'')) and shows an exponentially decreasing error probability, even if ''C''<sub>''in''</sub> has exponential decoding complexity.<ref name="Forney"/> This is discussed in more detail in the section [[#Decoding concatenated codes|Decoding concatenated codes]].
In a generalization of the above concatenation, there are ''N'' possible inner codes ''C''<sub>''in'',''i''</sub> and the ''i''-th symbol in a codeword of ''C''<sub>''out''</sub> is transmitted across the inner channel using the ''i''-th inner code. The [[Justesen code]]s are examples of generalized concatenated codes, where the outer code is a [[Reed–Solomon code]].
==Properties==
'''1.''' The distance of the concatenated code ''C''<sub>''out''</sub>∘''C''<sub>''in''</sub> is at least ''dD'', that is, it is a [''nN'', ''kK'', ''D''<nowiki>'</nowiki>] code with minimum distance ''D''<nowiki>'</nowiki> ≥ ''dD''.
''Proof:'' Consider two distinct messages ''m''<sup>1</sup> ≠ ''m''<sup>2</sup> ∈ ''B''<sup>''K''</sup> and let Δ denote the Hamming distance between two codewords. Since ''C''<sub>''out''</sub> has minimum distance ''D'', the outer codewords ''C''<sub>''out''</sub>(''m''<sup>1</sup>) and ''C''<sub>''out''</sub>(''m''<sup>2</sup>) differ in at least ''D'' of their ''N'' symbols. In every such position, the inner code maps two distinct symbols of ''B'' to inner codewords that differ in at least ''d'' of their ''n'' symbols, because ''C''<sub>''in''</sub> has minimum distance ''d''. Consequently,
:<math>\Delta(C_{in}(C_{out}(m^1)), C_{in}(C_{out}(m^2))) \ge dD.</math>
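On a small example the bound can also be checked exhaustively. The sketch below uses illustrative toy single-parity-check component codes (here with ''d'' = ''D'' = 2, not codes from the literature) and measures the minimum distance of the concatenation by brute force:
<syntaxhighlight lang="python">
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def inner_encode(sym):                 # C_in: toy [n=3, k=2, d=2] binary parity code
    return tuple(sym) + (sum(sym) % 2,)

def outer_encode(msg):                 # C_out: toy [N=3, K=2, D=2] parity code over B = A^2
    return msg + (tuple(a ^ b for a, b in zip(*msg)),)

def concat_encode(msg):                # C_out ∘ C_in: a [9, 4] binary code
    return tuple(bit for sym in outer_encode(msg) for bit in inner_encode(sym))

messages = list(product(product((0, 1), repeat=2), repeat=2))    # all 16 messages in B^K
codewords = [concat_encode(m) for m in messages]
d_min = min(hamming(u, v) for u, v in product(codewords, repeat=2) if u != v)
print("measured minimum distance:", d_min, ">= d*D =", 2 * 2)    # prints 4 >= 4
</syntaxhighlight>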
'''2.''' If ''C''<sub>''out''</sub> and ''C''<sub>''in''</sub> are [[linear block code]]s, then ''C''<sub>''out''</sub>∘''C''<sub>''in''</sub> is also a linear block code.
This property can be easily shown based on the idea of defining a [[generator matrix]] for the concatenated code in terms of the generator matrices of ''C''<sub>''out''</sub> and ''C''<sub>''in''</sub>.
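As an illustration (not the general construction), the sketch below builds a binary generator matrix for the concatenation of the same toy parity-check codes by encoding the unit messages, and then checks exhaustively that every codeword equals its message multiplied by that matrix over GF(2):
<syntaxhighlight lang="python">
import numpy as np
from itertools import product

k, K = 2, 2                               # message lengths of C_in and C_out

def inner_encode(sym):                    # C_in: toy [3, 2, 2] binary parity code
    return list(sym) + [sum(sym) % 2]

def outer_encode(msg):                    # C_out: toy [3, 2, 2] parity code over B = GF(2)^2
    return msg + (tuple(a ^ b for a, b in zip(*msg)),)

def concat_encode(bits):                  # the composite map GF(2)^(kK) -> GF(2)^(nN)
    msg = (tuple(bits[0:2]), tuple(bits[2:4]))
    return [bit for sym in outer_encode(msg) for bit in inner_encode(sym)]

# Rows of G are the encodings of the unit messages e_1, ..., e_{kK}.
G = np.array([concat_encode([int(i == j) for j in range(k * K)]) for i in range(k * K)])

# Linearity check: every codeword equals its message times G over GF(2).
for bits in product((0, 1), repeat=k * K):
    assert list(np.dot(bits, G) % 2) == concat_encode(list(bits))
print(G)
</syntaxhighlight>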
==Decoding concatenated codes==
A natural concept for a decoding algorithm for concatenated codes is to first decode the inner code and then the outer code. For the algorithm to be practical, it must be polynomial-time in the final block length.
In detail, let the input to the decoder be the vector ''y'' = (''y''<sub>1</sub>, ..., ''y''<sub>''N''</sub>) ∈ (''A''<sup>''n''</sup>)<sup>''N''</sup>. Then the decoding algorithm is a two-step process:
# Use the [[maximum likelihood decoding|maximum likelihood decoder]] of the inner code ''C''<sub>''in''</sub> to reconstruct a vector of inner symbols ''y''<nowiki>'</nowiki> = (''y''<nowiki>'</nowiki><sub>1</sub>, ..., ''y''<nowiki>'</nowiki><sub>''N''</sub>), with ''y''<nowiki>'</nowiki><sub>''i''</sub> = MLD<sub>''C''<sub>''in''</sub></sub>(''y''<sub>''i''</sub>), 1 ≤ ''i'' ≤ ''N''.
# Run the unique decoding algorithm for ''C''<sub>''out''</sub> on ''y''<nowiki>'</nowiki>.
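A minimal sketch of this two-step decoder, under illustrative assumptions, is shown below: the inner code is a systematic [7, 4, 3] Hamming-type code decoded by brute-force maximum-likelihood search over its 16 codewords, and the outer code is a [3, 1, 3] repetition code over ''B'' = GF(2)<sup>4</sup> decoded by majority vote. None of these choices comes from the text above.
<syntaxhighlight lang="python">
import numpy as np
from itertools import product

# Inner code C_in: a systematic [7, 4, 3] Hamming-type code given by its generator matrix.
G_in = np.array([[1, 0, 0, 0, 1, 1, 0],
                 [0, 1, 0, 0, 1, 0, 1],
                 [0, 0, 1, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1, 1, 1]])

# Codebook: inner codeword -> 4-bit message symbol (an element of B = GF(2)^4).
inner_book = {tuple(np.dot(m, G_in) % 2): m for m in product((0, 1), repeat=4)}

def inner_encode(sym):
    return tuple(np.dot(sym, G_in) % 2)

def inner_decode(block):                     # nearest-codeword (maximum-likelihood) search
    return min(inner_book, key=lambda c: sum(a != b for a, b in zip(c, block)))

def outer_encode(sym):                       # C_out: [3, 1, 3] repetition code over B
    return (sym,) * 3

def outer_decode(symbols):                   # minimum distance decoding = majority vote
    return max(set(symbols), key=symbols.count)

def encode(message):                         # C_out ∘ C_in: GF(2)^4 -> GF(2)^21
    return [int(bit) for sym in outer_encode(tuple(message)) for bit in inner_encode(sym)]

def decode(received):
    blocks = [tuple(received[7 * i:7 * i + 7]) for i in range(3)]
    symbols = tuple(inner_book[inner_decode(b)] for b in blocks)    # step 1: inner MLD per block
    return outer_decode(symbols)                                    # step 2: outer decoding

message = (1, 0, 1, 1)
word = encode(message)
word[2] ^= 1
word[9] ^= 1                                 # two bit errors in total: fewer than dD/4 = 2.25
print(decode(word) == message)               # True: the message is recovered
</syntaxhighlight>
With ''d'' = ''D'' = 3 the combined code has distance at least 9, and the two errors introduced above stay below the ''dD''/4 guarantee discussed next.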
The decoding algorithm described above can correct every error pattern containing fewer than ''dD''/4 errors. Using [[minimum distance decoding]], the outer decoder can correct all inputs ''y''<nowiki>'</nowiki> with less than ''D''/2 symbols ''y''<nowiki>'</nowiki><sub>''i''</sub> in error. Similarly, the inner code can reliably correct an input ''y''<sub>''i''</sub> if less than ''d''/2 inner symbols are erroneous. Thus, for an outer symbol ''y''<nowiki>'</nowiki><sub>''i''</sub> to be incorrect after inner decoding at least ''d''/2 inner symbols must have been in error, and for the outer code to fail this must have happened for at least ''D''/2 outer symbols. Consequently, the total number of inner symbols that must be received incorrectly for the concatenated code to fail must be at least ''d''/2⋅''D''/2 = ''dD''/4.
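For illustration, with arbitrarily chosen parameters ''d'' = 3 and ''D'' = 7, an outer symbol can only be decoded incorrectly if its inner block contains at least <math>\lceil 3/2 \rceil = 2</math> errors, and the outer decoder can only fail if at least <math>\lceil 7/2 \rceil = 4</math> outer symbols are wrong; a decoding failure therefore requires at least
:<math>\left\lceil \tfrac{d}{2} \right\rceil \cdot \left\lceil \tfrac{D}{2} \right\rceil = 2 \cdot 4 = 8</math>
channel errors, so in particular every pattern of at most 5 errors (fewer than ''dD''/4 = 5.25) is corrected.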
The algorithm also works if the inner codes are different, e.g., for [[Justesen code]]s. The [[generalized minimum distance decoding|generalized minimum distance algorithm]], developed by Forney, can be used to correct up to ''dD''/2 errors.<ref name="gmd">
{{cite journal
|first=G. David
|last=Forney
|title=Generalized Minimum Distance Decoding
|journal=IEEE Transactions on Information Theory
|volume=12
|issue=2
|pages=125–131
|date=April 1966
|doi=10.1109/TIT.1966.1053873
}}</ref>
It uses [[erasure code|erasure]] information from the inner code to improve performance of the outer code, and was the first example of an algorithm using [[soft-decision decoding]].<ref>{{cite journal
|first1=Christopher C.H.
|last1=Yu
|first2=Daniel J.
|last2=Costello
|title=Generalized Minimum Distance Decoding for ''Q''-ary Output Channels
|journal=IEEE Transactions on Information Theory
|volume=26
|issue=2
|pages=238–243
|date=March 1980
|doi=10.1109/TIT.1980.1056148
}}</ref><ref>{{cite journal
|first1=Yingquan
|last1=Wu
|first2=Christoforos
|last2=Hadjicostis
|title=Soft-Decision Decoding of Linear Block Codes Using Preprocessing and Diversification
|journal=IEEE Transactions on Information Theory
|volume=53
|issue=1
|pages=387–393
|date=January 2007
|doi=10.1109/tit.2006.887478
|s2cid=8338433
}}</ref>
==Applications==
Although a simple concatenation scheme was already implemented for the 1971 [[Mariner 9|Mariner]] Mars orbiter mission,<ref name="McEliece"/> concatenated codes began to be regularly used for [[Deep Space Network|deep space]] communication with the [[Voyager program]], which launched two [[space probe]]s in 1977.
Typically, the inner code is not a block code but a soft-decision [[convolutional code|convolutional]] [[Viterbi decoder|Viterbi-decoded]] code with a short constraint length.<ref>
{{cite journal
|author=J. P. Odenwalder
|title=Optimal decoding of convolutional codes
|publisher=[[U.C.L.A.]], Systems Science Dept.
|year=1970
}}
</ref>
For the outer code, a longer hard-decision block code, frequently a [[Reed–Solomon code]] with eight-bit symbols, is used.<ref name="McEliece">
{{cite journal
|author1=R. J. McEliece
|author-link=Robert McEliece
|author2=Laif Swanson
|title=Reed–Solomon Codes and the Exploration of the Solar System
|publisher=JPL
|date=20 August 1993
}}
</ref>
The larger symbol size makes the outer code more robust to [[error burst]]s that can occur due to channel impairments, and also because erroneous output of the convolutional code itself is bursty.
The combination of an inner Viterbi convolutional code with an outer [[Reed–Solomon code]] (known as an RSV code) was used in the [[Voyager program]] and became a popular construction both within and outside of the deep space sector. It is still notably used for satellite communications, such as the [[DVB-S]] digital television broadcast standard.
In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the [[DVB-S2]] standard, a highly efficient [[LDPC code]] is combined with an algebraic outer code in order to remove any resilient errors left over from the inner LDPC code due to its inherent [[error floor]].
A simple concatenation scheme is also used on the [[compact disc]] (CD), where an interleaving layer between two [[Reed–Solomon code]]s of different sizes spreads errors across various blocks.
== Turbo codes: A parallel concatenation approach ==
The description above is given for what is now called a serially concatenated code. [[Turbo code]]s, as first described in 1993, implemented a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information back and forth between the component decoders. This design has a better performance than any previously conceived concatenated code.
However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the "Galileo code" of the [[Galileo (spacecraft)|Galileo space probe]].
==See also==
*[[Gilbert–Varshamov bound]]
*[[Justesen code]]
*[[Singleton bound]]
*[[Zyablov bound]]
== References ==
{{reflist|33em}}
== Further reading ==
* {{cite book |author1=Shu Lin |author2=Daniel J. Costello Jr. | title=Error Control Coding: Fundamentals and Applications |url=https://archive.org/details/errorcontrolcodi00lins_044 |url-access=limited | publisher=[[Prentice Hall]] | year=1983 | isbn=978-0-13-283796-5 | pages=[https://archive.org/details/errorcontrolcodi00lins_044/page/n296 278]–280}}
== External links ==
* {{scholarpedia|title=Concatenated codes|urlname=Concatenated_codes|curator=[[Dave Forney]]}}
* [https://web.archive.org/web/20110606191907/http://www.cse.buffalo.edu/~atri/courses/coding-theory/fall07.html University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra]
{{CCSDS|state=collapsed}}