Concatenated error correction code

{{Use dmy dates|date=July 2014}}
In [[coding theory]], '''concatenated codes''' form a class of [[error-correcting code]]s that are derived by combining an '''inner code''' and an '''outer code'''. They were conceived in 1966 by [[Dave Forney]] as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and [[polynomial-time]] decoding [[Computational complexity theory|complexity]].<ref name="Forney">
{{cite journal
|author=G. D. Forney
|author-link=Dave Forney
|title=Concatenated codes
|publisher=MIT Press
|year=1967
}}</ref>
[[Noisy-channel coding theorem|Shannon's channel coding theorem]] shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates <math>R</math> less than a certain threshold <math>C</math>, called the [[channel capacity]] of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length <math>N</math> of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with <math>N</math>, so such an optimum decoder rapidly becomes infeasible.
 
In his [https://web.archive.org/web/20121012080412/http://mitpress.mit.edu/catalog/item/default.asp?tid=5813&ttype=2 doctoral thesis], [[Dave Forney]] showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.
 
==Description==
Let ''C''<sub>''in''</sub> be an [''n'', ''k'', ''d''] code, that is, a [[block code]] of length ''n'', message length ''k'', and minimum [[Hamming distance]] ''d'', over an alphabet ''A'':
:<math>C_{in}: A^k \rightarrow A^n</math>
Let ''C''<sub>''out''</sub> be an [''N'', ''K'', ''D''] code over an alphabet ''B'' with |''B''| = |''A''|<sup>''k''</sup> symbols:
:<math>C_{out}: B^K \rightarrow B^N</math>
The inner code ''C''<sub>''in''</sub> takes one of |''A''|<sup>''k''</sup> = |''B''| possible inputs, encodes into an ''n''-tuple over ''A'', transmits, and decodes into one of |''B''| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet ''B''. We use this channel ''N'' times to transmit each of the ''N'' symbols in a codeword of ''C''<sub>''out''</sub>. The ''concatenation'' of ''C''<sub>''out''</sub> (as outer code) with ''C''<sub>''in''</sub> (as inner code), denoted ''C''<sub>''out''</sub><math>\circ</math>''C''<sub>''in''</sub>, is thus a code of length ''Nn'' over the alphabet ''A'':<ref name="Forney"/>
:<math>C_{out} \circ C_{in}: A^{kK} \rightarrow A^{nN}</math>
It maps each input message ''m'' = (''m''<sub>1</sub>, ''m''<sub>2</sub>, ..., ''m''<sub>K</sub>) to a codeword (''C''<sub>''in''</sub>(''m''<nowiki>'</nowiki><sub>1</sub>), ''C''<sub>''in''</sub>(''m''<nowiki>'</nowiki><sub>2</sub>), ..., ''C''<sub>''in''</sub>(''m''<nowiki>'</nowiki><sub>N</sub>)), where (''m''<nowiki>'</nowiki><sub>1</sub>, ''m''<nowiki>'</nowiki><sub>2</sub>, ..., ''m''<nowiki>'</nowiki><sub>N</sub>) = ''C''<sub>''out''</sub>(''m''<sub>1</sub>, ''m''<sub>2</sub>, ..., ''m''<sub>K</sub>).
 
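As a concrete illustration, the composition can be sketched in a few lines of code. The toy codes below are assumptions made for this sketch, not part of the original presentation: the inner code is a [3, 2, 2] binary parity-check code (so ''A'' = {0, 1} and ''k'' = 2, giving |''B''| = 4), and the outer code is a [3, 2, 2] code over the integers mod 4.

```python
# Toy inner code: [3, 2, 2] binary parity-check code over A = {0, 1}.
# Each symbol of B (an integer 0..3, i.e. two bits) becomes (b1, b0, b1 XOR b0).
def inner_encode(sym):
    b1, b0 = (sym >> 1) & 1, sym & 1
    return [b1, b0, b1 ^ b0]

# Toy outer code: [3, 2, 2] code over B = Z_4, mapping (a, b) to (a, b, a+b mod 4).
def outer_encode(msg):
    a, b = msg
    return [a, b, (a + b) % 4]

# Concatenation C_out ∘ C_in: outer-encode the message, then inner-encode
# each of the N = 3 outer symbols, giving a binary codeword of length nN = 9.
def concat_encode(msg):
    return [bit for sym in outer_encode(msg) for bit in inner_encode(sym)]

print(concat_encode((1, 2)))  # → [0, 1, 1, 1, 0, 1, 1, 1, 0]
```

The encoder for the concatenated code is literally the composition of the two encoders, with the inner encoder applied once per outer symbol.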
==Properties==
'''1.''' The distance of the concatenated code ''C''<sub>''out''</sub><math>\circ</math>''C''<sub>''in''</sub> is at least ''dD'', that is, it is a [''nN'', ''kK'', ''D''<nowiki>'</nowiki>] code with ''D''<nowiki>'</nowiki> ≥ ''dD''.
 
''Proof:''
Let ''m''<sup>1</sup> and ''m''<sup>2</sup> be two distinct messages. Since ''C''<sub>''out''</sub> has distance ''D'', the codewords ''C''<sub>''out''</sub>(''m''<sup>1</sup>) and ''C''<sub>''out''</sub>(''m''<sup>2</sup>) differ in at least ''D'' of their ''N'' outer symbols. In each such position, the corresponding inner codewords differ in at least ''d'' symbols, because ''C''<sub>''in''</sub> has distance ''d''. Therefore
:<math>\Delta(C_{in}(C_{out}(m^1)), C_{in}(C_{out}(m^2))) \ge dD.</math>
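For a small code, property 1 can be verified exhaustively. The snippet below uses illustrative toy codes assumed for this sketch (not from the original text): a [3, 2, 2] binary parity-check inner code (''d'' = 2) and a [3, 2, 2] outer code over the integers mod 4 (''D'' = 2), so the concatenated [9, 4] binary code must have distance at least ''dD'' = 4.

```python
from itertools import product

# Toy inner code: [3, 2, 2] binary parity-check code (d = 2).
def inner_encode(sym):
    b1, b0 = (sym >> 1) & 1, sym & 1
    return [b1, b0, b1 ^ b0]

# Toy outer code: [3, 2, 2] code over Z_4 (D = 2);
# any two positions of (a, b, a+b mod 4) determine the third.
def outer_encode(msg):
    a, b = msg
    return [a, b, (a + b) % 4]

def concat_encode(msg):
    return [bit for sym in outer_encode(msg) for bit in inner_encode(sym)]

def hamming(x, y):
    return sum(u != v for u, v in zip(x, y))

# Exhaustively check all pairs among the 16 codewords of the [9, 4] binary code.
words = [concat_encode(m) for m in product(range(4), repeat=2)]
dmin = min(hamming(x, y) for i, x in enumerate(words) for y in words[i + 1:])
assert dmin >= 2 * 2  # D' >= dD = 4
```

In this particular example the bound is tight: the minimum distance of the concatenated code is exactly 4.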
 
'''2.''' If ''C''<sub>''out''</sub> and ''C''<sub>''in''</sub> are [[linear block code]]s, then ''C''<sub>''out''</sub><math>\circ</math>''C''<sub>''in''</sub> is also a linear block code.
 
This property can be easily shown based on the idea of defining a [[generator matrix]] for the concatenated code in terms of the generator matrices of ''C''<sub>''out''</sub> and ''C''<sub>''in''</sub>.
==Decoding concatenated codes==
 
A natural concept for a decoding algorithm for concatenated codes is to first decode the inner code and then the outer code. For the algorithm to be practical, it must be [[polynomial-time]] in the final block length. Suppose there is a polynomial-time unique decoding algorithm for the outer code; we then also need a polynomial-time decoding algorithm for the inner code, where polynomial running time again means polynomial in the final block length. The main idea is that if the inner block length is selected to be logarithmic in the size of the outer code, then the decoding algorithm for the inner code may run in [[exponential time]] in the inner block length, which allows the use of an exponential-time but optimal [[Decoding methods#Maximum likelihood decoding|maximum likelihood decoder]] (MLD) for the inner code.
 
In detail, let the input to the decoder be the vector ''y'' = (''y''<sub>1</sub>, ..., ''y''<sub>''N''</sub>) ∈ (''A''<sup>''n''</sup>)<sup>''N''</sup>. Then the decoding algorithm is a two-step process:
# Decode each ''y''<sub>''i''</sub> with the maximum likelihood decoder of the inner code to obtain the symbol ''y''<nowiki>'</nowiki><sub>''i''</sub> ∈ ''B'' whose inner encoding is closest to ''y''<sub>''i''</sub> in Hamming distance.
# Run the unique decoding algorithm of the outer code on ''y''<nowiki>'</nowiki> = (''y''<nowiki>'</nowiki><sub>1</sub>, ..., ''y''<nowiki>'</nowiki><sub>''N''</sub>).

This procedure corrects any pattern of fewer than ''dD''/4 errors: producing a wrong symbol in the first step requires at least ''d''/2 errors within one inner block, and the unique outer decoder fails only if at least ''D''/2 symbols are wrong.

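The two steps can be illustrated with a toy construction assumed for this sketch (not from the original text): the inner code is a [3, 2, 2] binary parity-check code decoded by exhaustive MLD, and the outer code is a [3, 1, 3] repetition code over ''B'' = Z<sub>4</sub>, whose unique decoder is a majority vote.

```python
# Toy inner code: [3, 2, 2] binary parity-check code; the codebook maps each
# symbol of B = Z_4 to (b1, b0, b1 XOR b0).
INNER = {s: ((s >> 1) & 1, s & 1, ((s >> 1) ^ s) & 1) for s in range(4)}

def encode(sym):
    # C_out ∘ C_in: repeat the symbol three times, inner-encode each copy.
    return [bit for _ in range(3) for bit in INNER[sym]]

def inner_mld(block):
    # Step 1: exhaustive maximum likelihood (nearest codeword) decoding,
    # feasible because the inner code has only |B| = 4 codewords.
    return min(INNER, key=lambda s: sum(a != b for a, b in zip(INNER[s], block)))

def outer_decode(symbols):
    # Step 2: unique decoder of the repetition code -- a majority vote.
    return max(set(symbols), key=symbols.count)

def decode(bits):
    blocks = [tuple(bits[3 * i:3 * i + 3]) for i in range(3)]
    return outer_decode([inner_mld(b) for b in blocks])

# Any single bit error corrupts at most one inner block, so at most one
# outer symbol can be wrong and the majority vote recovers the message.
for sym in range(4):
    for pos in range(9):
        received = encode(sym)
        received[pos] ^= 1
        assert decode(received) == sym
```

Note that the inner MLD here is a brute-force search over all inner codewords, which is exactly the exponential-time decoder that remains affordable when the inner block length is small.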
An improved algorithm, [[generalized minimum distance decoding]], was developed by Forney in 1966.<ref>{{cite journal
|first=G. D.
|last=Forney
|title=Generalized Minimum Distance Decoding
|journal=IEEE Transactions on Information Theory
|volume=12
|issue=2
|pages=125–131
|publisher=IEEE
|date=April 1966
|doi=10.1109/TIT.1966.1053873
}}</ref>
It uses [[erasure code|erasure]] information from the inner code to improve performance of the outer code, and was the first example of an algorithm using [[soft-decision decoding]].<ref>{{cite journal
|last2=Costello
|title=Generalized Minimum Distance Decoding for ''Q''-ary Output Channels
|journal=IEEE Transactions on Information Theory
|volume=26
|issue=2
|pages=238–243
|publisher=IEEE
|date=March 1980
|doi=10.1109/TIT.1980.1056148
}}</ref><ref>{{cite journal
|first1=Yingquan
|last1=Wu
|first2=Christoforos
|last2=Hadjicostis
|title=Soft-Decision Decoding of Linear Block Codes Using Preprocessing and Diversification
|journal=IEEE Transactions on Information Theory
|volume=53
|issue=1
|pages=387–393
|publisher=IEEE
|date=January 2007
|doi=10.1109/tit.2006.887478
|s2cid=8338433
}}</ref>
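The use of erasure information can be sketched as follows. This is a simplified illustration of the idea only, not Forney's actual GMD algorithm: the inner decoder declares an erasure whenever the nearest inner codeword is not unique, and a majority vote over a toy repetition outer code (an assumption of this sketch) simply ignores erased positions.

```python
# Toy inner code: [3, 2, 2] binary parity-check codebook over B = Z_4.
INNER = {s: ((s >> 1) & 1, s & 1, ((s >> 1) ^ s) & 1) for s in range(4)}

def inner_decode_with_erasure(block):
    # Nearest-codeword decoding, but output an erasure (None) when the
    # nearest inner codeword is not unique.
    dists = {s: sum(a != b for a, b in zip(cw, block)) for s, cw in INNER.items()}
    best = min(dists.values())
    candidates = [s for s, d in dists.items() if d == best]
    return candidates[0] if len(candidates) == 1 else None

def outer_decode_with_erasures(symbols):
    # Stand-in outer decoder: majority vote over the non-erased symbols
    # of a [3, 1, 3] repetition code over Z_4.
    present = [s for s in symbols if s is not None]
    return max(set(present), key=present.count)

# A single bit error makes exactly one inner block ambiguous (the inner
# distance is 2); that block is erased rather than mis-decoded, and the
# outer decoder ignores it.
msg = 3
bits = [b for _ in range(3) for b in INNER[msg]]
bits[4] ^= 1  # flip one channel bit
blocks = [tuple(bits[3 * i:3 * i + 3]) for i in range(3)]
decoded = outer_decode_with_erasures([inner_decode_with_erasure(b) for b in blocks])
assert decoded == msg
```

Passing an erasure to the outer decoder is cheaper to overcome than a hard error: a standard errors-and-erasures decoder succeeds whenever 2''e'' + ''s'' < ''D'', where ''e'' is the number of symbol errors and ''s'' the number of erasures.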
 
==Applications==
Although a simple concatenation scheme was implemented as early as the 1971 [[Mariner 8|Mariner]] Mars orbiter mission,<ref name="McEliece"/> concatenated codes came into regular use for [[Deep Space Network|deep space]] communication with the [[Voyager program]], which launched two [[space probe]]s in 1977.<ref name="deep-space-codes">K. Andrews et al., ''The Development of Turbo and LDPC Codes for Deep-Space Applications'', Proceedings of the IEEE, Vol. 95, No. 11, Nov. 2007.</ref> Concatenated codes then became the workhorse of efficient error correction coding, and remained so at least until the invention of [[turbo codes]] and [[LDPC codes]].<ref name="McEliece"/><ref name="deep-space-codes"/>
 
Typically, the inner code is not a block code but a [[soft-decision decoder|soft-decision]] [[convolutional code|convolutional]] [[Viterbi decoder|Viterbi-decoded]] code with a short constraint length.<ref name="Odenwalder">
{{cite book
|author=J. P. Odenwalder
|title=Optimal decoding of convolutional codes
|publisher=[[U.C.L.A.]], Systems Science Dept. ''(dissertation)''
|year=1970
}}
</ref>
For the outer code, a longer hard-decision block code, frequently a [[Reed-Solomon code]] with eight-bit symbols, is used.<ref name="Forney"/><ref name="McEliece">
{{cite journal
|author1=Robert J. McEliece
|author1-link=Robert J. McEliece
|author2=Laif Swanson
|title=Reed–Solomon Codes and the Exploration of the Solar System
|publisher=JPL
|date=20 August 1993
}}
</ref>
In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the [[DVB-S2]] standard, a highly efficient [[LDPC code]] is combined with an algebraic outer code in order to remove any residual errors left over from the inner LDPC code due to its inherent [[error floor]].<ref>[http://www.etsi.org/deliver/etsi_en/302300_302399/302307/01.02.01_60/en_302307v010201p.pdf Digital Video Broadcasting (DVB); Second generation framing structure, channel coding and modulation systems for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2)], [[ETSI]] EN 302 307, V1.2.1, April 2009.</ref>
 
A simple concatenation scheme is also used on the [[compact disc]] (CD), where an interleaving layer between two Reed–Solomon codes of different sizes spreads errors across various blocks.
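The effect of an interleaving layer can be sketched with a simple block interleaver. This is an illustrative model assumed for the sketch, not the CD's actual cross-interleaved Reed–Solomon schedule: symbols are written row by row and read column by column, so a burst of channel errors lands in several different outer codewords.

```python
def interleave(symbols, rows, cols):
    # Write row by row, read column by column.
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Exact inverse of interleave().
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

ROWS, COLS = 4, 6          # 4 outer codewords of 6 symbols each
data = list(range(ROWS * COLS))
channel = interleave(data, ROWS, COLS)
for p in range(8, 12):     # a burst of 4 consecutive channel errors
    channel[p] = -1        # -1 marks a corrupted symbol
received = deinterleave(channel, ROWS, COLS)

# After deinterleaving, each row (one outer codeword) contains at most
# one corrupted symbol, which a distance-3 outer code could correct.
for r in range(ROWS):
    row = received[r * COLS:(r + 1) * COLS]
    assert row.count(-1) <= 1
```

Because consecutive channel positions map to consecutive rows, a burst no longer than the number of rows contributes at most one error per outer codeword.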
 
== Turbo codes: A parallel concatenation approach ==
The description above is given for what is now called a serially concatenated code. [[Turbo code]]s, first described in 1993, implement a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information back and forth between them.<ref name="deep-space-codes"/> This design performs better than any previously conceived concatenated code.
 
However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the "Galileo code" of the [[Galileo (spacecraft)|Galileo space probe]].<ref name="McEliece"/>
 
==See also==
*[[Gilbert–Varshamov bound]]
*[[Justesen code]]
*[[Singleton bound]]
*[[Zyablov bound]]
 
== References ==
{{Reflist}}
 
== Further reading ==
* {{cite book | author1=Shu Lin | author2=Daniel J. Costello, Jr. | title=Error Control Coding: Fundamentals and Applications | url=https://archive.org/details/errorcontrolcodi00lins_044 | url-access=limited | publisher=[[Prentice Hall]] | year=1983 | isbn=978-0-13-283796-5 | pages=[https://archive.org/details/errorcontrolcodi00lins_044/page/n296 278]–280}}
* {{cite book | author=F.J. MacWilliams | authorlink=Jessie MacWilliams | author2=N.J.A. Sloane | title=The Theory of Error-Correcting Codes | url=https://archive.org/details/theoryoferrorcor0000macw | url-access=registration | publisher=North-Holland | year=1977 | isbn=978-0-444-85193-2 | pages=[https://archive.org/details/theoryoferrorcor0000macw/page/307 307–316] }}
 
== External links ==
* {{scholarpedia|title=Concatenated codes|urlname=Concatenated_codes|curator=[[Dave Forney]]}}
* [https://web.archive.org/web/20110606191907/http://www.cse.buffalo.edu/~atri/courses/coding-theory/fall07.html University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra]
 
{{CCSDS|state=collapsed}}