Turbo code: Difference between revisions

: For example, for each bit, the front end of a traditional wireless receiver has to decide whether an internal analog voltage is above or below a given threshold voltage level. For a turbo-code decoder, the front end would instead provide an integer measure of how far the internal voltage is from the given threshold.
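The hard-decision versus soft-decision contrast above can be sketched as two quantizers. This is only an illustration: the function names, the 3-bit width, and the quantization step are assumptions, not part of any receiver standard.

```python
def hard_decision(voltage, threshold=0.0):
    # Traditional front end: each sample becomes exactly one bit.
    return 1 if voltage > threshold else 0

def soft_decision(voltage, threshold=0.0, step=0.25, bits=3):
    # Turbo-code front end: an integer measure of how far the voltage
    # is from the threshold, here quantized to a signed 3-bit value
    # in the range -4..3 (widths and step are illustrative choices).
    q = round((voltage - threshold) / step)
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return max(lo, min(hi, q))
```

A voltage far above the threshold saturates at the top of the range, while a marginal voltage yields a small value near zero, telling the decoder how much to trust that bit.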
 
To decode the ''m+n''-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the ''n/2''-bit parity sub-blocks. Both decoders use the sub-block of ''m'' likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.
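The sub-block layout described above can be sketched as a simple slicing step. The contiguous ordering assumed here (payload first, then the two parity sub-blocks) is illustrative; real systems may interleave the streams differently.

```python
def split_block(likelihoods, m, n):
    # Split an (m+n)-value block of soft measures into the payload
    # sub-block (m values) and the two parity sub-blocks (n/2 each).
    # The contiguous layout is an assumption for illustration.
    assert len(likelihoods) == m + n and n % 2 == 0
    payload = likelihoods[:m]
    parity1 = likelihoods[m:m + n // 2]
    parity2 = likelihoods[m + n // 2:]
    return payload, parity1, parity2
```

Each convolutional decoder would then receive the shared `payload` likelihoods plus its own parity sub-block.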
 
'''The nitty-gritty''' of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of ''m'' bits in the payload sub-block. The hypothesis bit-patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload; then they compare these new hypotheses. This iterative process continues until the two decoders converge on the same hypothesis for the ''m''-bit pattern of the payload, typically in 4 to 10 cycles.
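The exchange loop described above can be sketched with a toy stand-in for the decoders. In this sketch each "decoder" simply adds the extrinsic information received from its peer to its own channel log-likelihood ratios; a real decoder would instead run a trellis algorithm such as BCJR over its parity sub-block. All names and the LLR convention (positive means bit 1) are assumptions for illustration.

```python
def sign_bits(llrs):
    # Hypothesis: a bit is 1 if its log-likelihood ratio favours 1.
    return [1 if llr > 0 else 0 for llr in llrs]

def iterative_decode(llr_a, llr_b, max_iters=10):
    # Toy model of the turbo iteration. llr_a and llr_b are each
    # decoder's channel likelihoods for the m payload bits; the real
    # per-decoder computation is replaced by simple LLR addition.
    ext_a = [0.0] * len(llr_a)  # extrinsic info produced by decoder A
    ext_b = [0.0] * len(llr_b)  # extrinsic info produced by decoder B
    for iteration in range(1, max_iters + 1):
        # Each decoder combines its own likelihoods with the
        # derived likelihoods received from the other decoder.
        post_a = [c + e for c, e in zip(llr_a, ext_b)]
        post_b = [c + e for c, e in zip(llr_b, ext_a)]
        hyp_a, hyp_b = sign_bits(post_a), sign_bits(post_b)
        if hyp_a == hyp_b:
            return hyp_a, iteration  # hypotheses agree: done
        # Hypotheses differ: exchange derived (extrinsic) likelihoods,
        # subtracting what the peer already contributed.
        new_ext_a = [p - e for p, e in zip(post_a, ext_b)]
        new_ext_b = [p - e for p, e in zip(post_b, ext_a)]
        ext_a, ext_b = new_ext_a, new_ext_b
    return hyp_a, max_iters  # no agreement within the iteration budget
```

In this degenerate model the decoders agree after at most two passes; in a real turbo decoder the extrinsic values are reshaped by the trellis at every pass, which is why convergence typically takes several iterations.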
 
==External link==