'''Turbo codes''' are a class of recently developed high-performance [[error-correcting code|error correction codes]] finding use in deep-space [[satellite]] [[communications]] and other applications where designers seek to achieve maximal information transfer over a limited-bandwidth communication link in the presence of data-corrupting noise.
 
=== The Shannon Limit ===
Of all practical error correction methods known to date, turbo codes, together with [[Low-density parity-check code]]s, come closest to approaching the [[Shannon limit]], the theoretical limit of maximum information transfer rate over a noisy channel.
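For reference, one common form of the Shannon limit is the Shannon–Hartley capacity ''C'' = ''B'' log<sub>2</sub>(1 + ''S/N''). A quick computation, with bandwidth and SNR values chosen arbitrarily for illustration:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley theorem: the maximum rate (bits per second) at which
    # information can be sent over a noisy analog channel with arbitrarily
    # small error probability.
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz channel at 20 dB SNR (a power ratio of 100):
print(shannon_capacity(3000, 100))   # about 19975 bit/s
```

No practical code reaches this rate exactly; turbo codes and LDPC codes get within a fraction of a decibel of it.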
 
Prior to turbo codes, the best technique known combined a [[Reed-Solomon error correction]] [[block code]] with a [[Viterbi algorithm]] [[convolutional code]], a combination also known as an RSV code. RSV codes were never able to approach the [[Shannon limit]] as closely as turbo codes have.
 
=== Who devised Turbo Codes? ===
The method was introduced by [[Claude Berrou|Berrou]], [[Alain Glavieux|Glavieux]], and [[Punya Thitimajshima|Thitimajshima]] in their [[1993]] paper: "''Near Shannon Limit error-correcting coding and decoding: Turbo-codes''" published in the Proceedings of IEEE International Communications Conference [http://www-elec.enst-bretagne.fr/equipe/berrou/Near%20Shannon%20Limit%20Error.pdf]. Turbo code refinements and implementation are an area of active research at a number of universities.
 
===How turbo codes work===
There are two related features of turbo codes that distinguish them from the more traditional error-correcting codes of the 20th century:
* The key insight is the realization that instead of producing a stream of binary digits from the signal it receives, the front-end of the decoder can be designed to produce a likelihood measure for each bit.
* The nitty-gritty of turbo codes is the design of the decoder (and the coder) so that it can exploit this additional information.
 
===The encoder===
The encoder sends three sub-blocks of bits. The first sub-block is the ''m''-bit block of payload data. The second sub-block is ''n/2'' parity bits for the payload data, computed using a [[convolutional code]]. The third sub-block is ''n/2'' parity bits for a known [[permutation]] of the payload data, again computed using a convolutional code. That is, two redundant but different sub-blocks of parity bits for the sent payload. The complete block has ''m+n'' bits of data with a code rate of ''m/(m+n)''.
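The three sub-blocks and the resulting code rate can be sketched with a toy encoder. This is not a real turbo encoder: both constituent encoders are replaced here by a trivial running-parity stand-in rather than the recursive systematic convolutional codes actually used, and `conv_parity`, `turbo_encode`, and the interleaver seed are all invented for illustration.

```python
import random

def conv_parity(bits):
    # Toy stand-in for a convolutional encoder: a two-state accumulator
    # whose output at each step is the running parity of the input so far.
    state = 0
    out = []
    for b in bits:
        state ^= b
        out.append(state)
    return out

def turbo_encode(payload, seed=42):
    m = len(payload)
    # Fixed pseudo-random permutation (interleaver), known to both
    # the encoder and the decoder.
    perm = list(range(m))
    random.Random(seed).shuffle(perm)
    interleaved = [payload[i] for i in perm]
    parity1 = conv_parity(payload)       # n/2 parity bits for the payload
    parity2 = conv_parity(interleaved)   # n/2 parity bits for its permutation
    return payload + parity1 + parity2   # m + n bits in total

payload = [1, 0, 1, 1, 0, 0, 1, 0]
block = turbo_encode(payload)
m, n = len(payload), len(block) - len(payload)
print(len(block))     # 24 bits sent for 8 payload bits
print(m / (m + n))    # code rate m/(m+n) = 1/3 here
```

With both parity sub-blocks the same length as the payload, the rate works out to 1/3, which matches the rate of the original 1993 turbo code.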
 
===The decoder===
The decoder front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1, and is also called a ''soft bit''. The integer could be drawn from the range [−127, 127], where the sign encodes the hypothesis (negative for a 0, positive for a 1) and the magnitude encodes confidence: −127 means certainly 0, 0 means the bit could equally be either, and 127 means certainly 1.
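As a sketch of how such a front-end might quantize its input, assume an antipodal signal in which a nominal −1 means bit 0 and +1 means bit 1; the function name and the scale factor below are invented for illustration.

```python
def soft_bit(sample, scale=64.0):
    # Map a received analog sample to an integer "soft bit" in [-127, 127].
    # Strongly negative values indicate a likely 0, strongly positive a
    # likely 1; values near zero mean the bit is uncertain. The scale
    # factor is an arbitrary choice for this sketch.
    return max(-127, min(127, round(sample * scale)))

print(soft_bit(-1.9))   # -122: almost certainly a 0
print(soft_bit(0.05))   # 3: could be either
print(soft_bit(2.5))    # 127: saturates; almost certainly a 1
```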
 
To decode the ''m+n''-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the ''n/2''-bit parity sub-blocks. Both decoders use the sub-block of ''m'' likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.
 
===Solving hypotheses to find bits===
The nitty-gritty of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of ''m'' bits in the payload sub-block. The hypothesis bit-patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload. Then they compare these new hypotheses. This iterative process continues until the two decoders come up with the same hypothesis for the ''m''-bit pattern of the payload, typically in 15 to 18 cycles.
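The control flow of this exchange can be caricatured in a few lines. The sketch below replaces the real constituent decoders (which would run a MAP/BCJR algorithm over the parity sub-blocks and account for the permutation, none of which is shown) with toy stand-ins whose "derived likelihoods" are simply their own channel evidence; only the hypothesize–compare–exchange loop corresponds to the description above.

```python
def hard_decision(llrs):
    # Positive log-likelihood -> bit 1, negative or zero -> bit 0.
    return [1 if l > 0 else 0 for l in llrs]

def iterate(llrs_a, llrs_b, max_cycles=18):
    # Each toy decoder starts with only its own channel evidence. When the
    # hypotheses disagree, the decoders exchange derived likelihoods and
    # try again; real turbo decoders exchange extrinsic information
    # computed from the parity sub-blocks instead.
    ext_a = [0.0] * len(llrs_a)   # likelihoods received from decoder B
    ext_b = [0.0] * len(llrs_b)   # likelihoods received from decoder A
    for cycle in range(1, max_cycles + 1):
        hyp_a = hard_decision([c + e for c, e in zip(llrs_a, ext_a)])
        hyp_b = hard_decision([c + e for c, e in zip(llrs_b, ext_b)])
        if hyp_a == hyp_b:                   # agreement: decoding is done
            return hyp_a, cycle
        ext_a, ext_b = llrs_b, llrs_a        # exchange derived likelihoods
    return hyp_a, max_cycles

# Four payload bits, all 1. Noise makes bit 2 look like a 0 to decoder A,
# but decoder B's strong evidence for a 1 there outweighs A's doubt.
llrs_a = [2.0, 1.5, -0.5, 3.0]
llrs_b = [1.0, 2.0,  4.0, 2.5]
bits, cycles = iterate(llrs_a, llrs_b)
print(bits, cycles)   # [1, 1, 1, 1] after 2 cycles
```

In this toy setting agreement is reached after a single exchange; the 15 to 18 cycles cited above reflect the much slower convergence of real constituent decoders on noisy blocks.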