Concatenated error correction code

In [[coding theory]], '''concatenated codes''' form a class of [[error-correcting code]]s that are derived by combining an '''inner code''' and an '''outer code'''. They were conceived in 1966 by [[Dave Forney]] as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and [[polynomial-time]] decoding [[Computational_complexity_theory|complexity]].<ref name="Forney">
{{cite journal
|author=[[Dave Forney|G. D. Forney]]
|title=Concatenated Codes
|publisher=MIT Press
|year=1966
}}
</ref>
Concatenated codes became widely used in space communications in the 1970s.
 
The idea of concatenated codes is that we can create a new code by composing an existing code with another code that is defined over the desired alphabet. Suppose the output of the first code (the outer code) applied to the message is <math>(\sigma_{1}, \ldots, \sigma_{n-1})</math>. Each <math>\sigma_{i}</math> can be interpreted as a string over a smaller alphabet, so the second code (the inner code) can be used to encode each <math>\sigma_{i}</math> over that smaller alphabet. The final codeword is the concatenation of all these inner codewords.
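To make the composition concrete, here is a minimal Python sketch of a concatenated encoder. The particular codes chosen (a checksum-extension outer code over a 256-symbol alphabet and a rate-1/3 bit-repetition inner code) and the function names are illustrative assumptions only, not the codes described by Forney.

<syntaxhighlight lang="python">
def outer_encode(message):
    """Toy outer code over a large alphabet (integers 0..255):
    appends one checksum symbol. Illustrative assumption only."""
    return message + [sum(message) % 256]

def inner_encode(symbol):
    """Toy inner code: write one 8-bit outer symbol in binary and
    repeat each bit three times (a rate-1/3 repetition code)."""
    bits = [(symbol >> i) & 1 for i in reversed(range(8))]
    return [b for bit in bits for b in [bit] * 3]

def concatenated_encode(message):
    """Outer-encode the message, then inner-encode each outer symbol;
    the final codeword is the concatenation of the inner codewords."""
    return [b for sym in outer_encode(message) for b in inner_encode(sym)]

# A 3-symbol message becomes (3 + 1) outer symbols, each expanded
# to 8 bits * 3 repetitions, giving a 96-bit codeword.
print(len(concatenated_encode([7, 42, 255])))  # 96
</syntaxhighlight>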
 
==Origin==
The field of [[channel coding]] is concerned with sending a stream of data at as high a rate as possible over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology.
 
[[Noisy-channel_coding_theorem|Shannon's channel coding theorem]] shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates <math>R</math> less than a certain threshold <math>C</math>, called the [[channel capacity]] of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length <math>N</math> of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with <math>N</math>, so such an optimum decoder rapidly becomes infeasible.
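As a concrete illustration of this blow-up, the following Python sketch implements such a naive optimum decoder for a binary code (the <code>encode</code> argument is a hypothetical placeholder for any encoder): it compares the received word against all <math>2^{k}</math> candidate codewords of a code with <math>k = RN</math> message bits, so its running time grows exponentially with the block length.

<syntaxhighlight lang="python">
from itertools import product

def naive_ml_decode(received, encode, k):
    """Brute-force maximum-likelihood decoding of a binary code on a
    binary symmetric channel: enumerate all 2**k messages and return
    the one whose codeword is nearest to the received word in Hamming
    distance. The loop runs 2**k times, hence exponential cost."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(product([0, 1], repeat=k),
               key=lambda msg: hamming(encode(list(msg)), received))

# Example with a rate-1/3 repetition encoder (illustrative only):
encode = lambda msg: [b for bit in msg for b in [bit] * 3]
print(naive_ml_decode([1, 1, 0, 0, 0, 0], encode, k=2))  # (1, 0)
</syntaxhighlight>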
 
In his [http://mitpress.mit.edu/catalog/item/default.asp?tid=5813&ttype=2 doctoral thesis], [[Dave Forney]] showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.