Turbo code
* The nitty-gritty of turbo codes is the design of the decoder (and the coder) so that it can exploit this additional information.
 
===The encoder===
'''The encoder''' sends three sub-blocks of bits. The first sub-block is the ''m''-bit block of payload data. The second sub-block is ''n/2'' parity bits for the payload data, computed using a [[convolutional code]]. The third sub-block is ''n/2'' parity bits for a known [[permutation]] of the payload data, again computed using a convolutional code. That is, two redundant but different sub-blocks of parity bits are sent for the payload. The complete block has ''m'' + ''n'' bits of data with a code rate of ''m''/(''m'' + ''n'').
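The three sub-blocks can be sketched as follows. This is a minimal illustration, not a production encoder: the memory-2 parity function and the fixed interleaver are assumptions chosen for brevity, standing in for the real convolutional codes and interleavers used in practice.

```python
def conv_parity(bits):
    # Toy convolutional parity: each output bit is the XOR of the
    # current input bit and the two previous input bits (memory 2).
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)
        s2, s1 = s1, b
    return out

def turbo_encode(payload, interleaver):
    # Sub-block 1: the payload itself (the m systematic bits).
    # Sub-block 2: n/2 parity bits for the payload.
    # Sub-block 3: n/2 parity bits for a permuted copy of the payload.
    permuted = [payload[i] for i in interleaver]
    return payload, conv_parity(payload), conv_parity(permuted)

payload = [1, 0, 1, 1, 0, 0, 1, 0]        # m = 8 payload bits
interleaver = [3, 7, 0, 5, 2, 6, 1, 4]    # a fixed, known permutation
systematic, parity1, parity2 = turbo_encode(payload, interleaver)
# 8 payload + 16 parity bits sent: m = 8, n = 16, rate 8/24 = 1/3
```

Both parity sub-blocks are computed from the same payload, so they are redundant, but because the second is computed over a permuted copy, an error burst rarely damages both in the same positions.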
 
===The decoder===
'''The decoder''' front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1. The integer could be drawn from the range [-127, 127], where:
 
* -127 means "certainly 0"
To decode the ''m+n''-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the ''n/2''-bit parity sub-blocks. Both decoders use the sub-block of ''m'' likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.
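The front-end's job can be sketched in a few lines. The mapping convention (nominal samples of -1.0 for a 0 bit and +1.0 for a 1 bit, scaled and clamped to the integer range above) is an assumption for illustration; real front-ends derive these values from the demodulator.

```python
def soft_to_likelihood(sample, scale=64.0):
    # Map a received analog sample (assumed nominally -1.0 for bit 0,
    # +1.0 for bit 1) to an integer likelihood in [-127, 127].
    v = int(round(sample * scale))
    return max(-127, min(127, v))

def split_block(likelihoods, m, n):
    # The m+n likelihoods split into the payload sub-block and the
    # two n/2-bit parity sub-blocks, one per parallel decoder.
    payload = likelihoods[:m]
    parity1 = likelihoods[m:m + n // 2]
    parity2 = likelihoods[m + n // 2:]
    return payload, parity1, parity2
```

Both parallel decoders receive the same ''m'' payload likelihoods; only the second decoder additionally applies the known permutation before using them.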
 
===Solving hypotheses to derive bits===
'''The nitty-gritty''' of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of ''m'' bits in the payload sub-block. The hypothesis bit-patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload, and the new hypotheses are compared. This iterative process continues until the two decoders produce the same hypothesis for the ''m''-bit pattern of the payload, typically in 4 to 10 cycles.
 
==External link==