Convolutional code
{{Short description|Type of error-correcting code using convolution}}
{{More citations needed|date=May 2015}}
In [[Telecommunications|telecommunication]], a '''convolutional code''' is a type of [[Error correction code|error-correcting code]] that generates parity symbols via the sliding application of a [[Algebraic normal form|boolean polynomial]] function to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of the convolutional codes facilitates [[Trellis (graph)|trellis]] decoding using a time-invariant trellis. Time invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity.
 
The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the base [[code rate]] and the depth (or memory) of the encoder <math>[n,k,K]</math>. The base code rate is typically given as <math>n/k</math>, where {{mvar|n}} is the raw input data rate and {{mvar|k}} is the data rate of output channel encoded stream. {{mvar|n}} is less than {{mvar|k}} because channel coding inserts redundancy in the input bits. The memory is often called the "constraint length" {{mvar|K}}, where the output is a function of the current input as well as the previous <math>K-1</math> inputs. The depth may also be given as the number of memory elements {{mvar|v}} in the polynomial or the maximum possible number of states of the encoder (typically: <math>2^v</math>).
 
Convolutional codes are often described as continuous. However, it may also be said that convolutional codes have arbitrary block length, rather than being continuous, since most real-world convolutional encoding is performed on blocks of data. Convolutionally encoded block codes typically employ termination. The arbitrary block length of convolutional codes can also be contrasted to classic [[block code]]s, which generally have fixed block lengths that are determined by algebraic properties.
 
The code rate of a convolutional code is commonly modified via [[Punctured code|symbol puncturing]]. For example, a convolutional code with a 'mother' code rate <math>n/k=1/2</math> may be punctured to a higher rate of, for example, <math>7/8</math> simply by not transmitting a portion of code symbols. The performance of a punctured convolutional code generally scales well with the amount of parity transmitted. The ability to perform economical soft decision decoding on convolutional codes, as well as the block length and code rate flexibility of convolutional codes, makes them very popular for digital communications.
 
== History ==
Convolutional codes were introduced in 1955 by [[Peter Elias]]. It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay. In 1967, [[Andrew Viterbi]] determined that convolutional codes could be maximum-likelihood decoded with reasonable complexity using time invariant trellis based decoders — the [[Viterbi algorithm]]. Other trellis-based decoder algorithms were later developed, including the [[BCJR algorithm|BCJR]] decoding algorithm.
 
Recursive systematic convolutional codes were invented by [[Claude Berrou]] around 1991. These codes proved especially useful for iterative processing including the processing of concatenated codes such as [[Turbo code|turbo codes]].<ref>Benedetto, Sergio, and Guido Montorsi. "[https://web.archive.org/web/20190406181758/https://ieeexplore.ieee.org/abstract/document/390945/ Role of recursive convolutional codes in turbo codes]." Electronics Letters 31.11 (1995): 858-859.</ref>
 
Using the "convolutional" terminology, a classic convolutional code might be considered a [[Finite impulse response]] (FIR) filter, while a recursive convolutional code might be considered an [[Infinite impulse response]] (IIR) filter.
 
== Where convolutional codes are used ==
[[File:GSM convol code.png|thumb|right|400px|Stages of channel coding in GSM.<ref>Eberspächer J. et al. GSM-architecture, protocols and services. John Wiley & Sons, 2008. p.97</ref> Block encoder and Parity check – error detection part. Convolutional encoder and Viterbi decoder – error correction part. [[Error correction code#Interleaving|Interleaving]] and Deinterleaving – separation of code words in the time ___domain, used to avoid bursty distortions.]]
 
Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as [[digital video]], radio, [[Mobile telephony|mobile communications]] (e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)<ref>3rd Generation Partnership Project (September 2012). "3GPP TS 45.001: Technical Specification Group GSM/EDGE Radio Access Network; Physical layer on the radio path; General description". Retrieved 2013-07-20.</ref><ref>Halonen, Timo, Javier Romero, and Juan Melero, eds. GSM, GPRS and EDGE performance: evolution towards 3G/UMTS. John Wiley & Sons, 2004. p. 430</ref>) and [[Communications satellite|satellite communications]].<ref>Butman, S. A., L. J. Deutsch, and R. L. Miller. [http://tda.jpl.nasa.gov/progress_report/42-63/63H.PDF "Performance of concatenated codes for deep space missions."] The Telecommunications and Data Acquisition Progress Report 42-63, March–April 1981 (1981): 33-39.</ref> These codes are often implemented in [[Concatenated error correction code|concatenation]] with a hard-decision code, particularly [[Reed–Solomon error correction|Reed–Solomon]]. Prior to [[Turbo code|turbo codes]], such constructions were the most efficient, coming closest to the [[Shannon–Hartley theorem|Shannon limit]].
 
== Convolutional encoding ==
To convolutionally encode data, start with ''k'' [[Processor register|memory registers]], each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has ''n'' modulo-2 [[Adder (electronics)|adder]]s (a modulo 2 adder can be implemented with a single [[Boolean algebra|Boolean]] [[XOR gate]], where the logic is: {{math|1=0+0&nbsp;=&nbsp;0}}, {{math|1=0+1&nbsp;=&nbsp;1}}, {{math|1=1+0&nbsp;=&nbsp;1}}, {{math|1=1+1&nbsp;=&nbsp;0}}), and ''n'' [[Polynomial code|generator polynomials]] &mdash; one for each adder (see figure below). An input bit ''m''<sub>1</sub> is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs ''n'' symbols. These symbols may be transmitted or punctured depending on the desired code rate. Now [[Bitwise operation#Bit shifts|bit shift]] all register values to the right (''m''<sub>1</sub> moves to ''m''<sub>0</sub>, ''m''<sub>0</sub> moves to ''m''<sub>−1</sub>) and wait for the next input bit. If there are no remaining input bits, the encoder continues shifting until all registers have returned to the zero state (flush bit termination).
[[File:Convolutional encoder non-recursive.png|frame|Img.1. Rate 1/3 non-recursive, non-systematic convolutional encoder with constraint length 3]]
The figure below is a rate {{1/3}} ({{frac|''m''|''n''}}) encoder with constraint length (''k'') of 3. Generator polynomials are {{math|1=''G''<sub>1</sub> = (1,1,1),}} {{math|1=''G''<sub>2</sub> = (0,1,1)}}, and {{math|1=''G''<sub>3</sub> = (1,0,1)}}. Therefore, output bits are calculated (modulo 2) as follows:
:''n''<sub>1</sub> = ''m''<sub>1</sub> + ''m''<sub>0</sub> + ''m''<sub>−1</sub>
:''n''<sub>2</sub> = ''m''<sub>0</sub> + ''m''<sub>−1</sub>
:''n''<sub>3</sub> = ''m''<sub>1</sub> + ''m''<sub>−1</sub>.
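The encoding procedure above can be sketched in a few lines of Python. This is an illustrative implementation (the function name is ours, not from any library), using the Img.1 generator polynomials and flush-bit termination as described:

```python
def encode_r13(bits):
    """Rate-1/3 convolutional encoder sketch for G1=(1,1,1), G2=(0,1,1),
    G3=(1,0,1), constraint length 3.

    Two memory registers (m0, m_minus1) start at zero; two flush bits
    return them to the zero state (flush-bit termination).
    """
    m0 = m_minus1 = 0                   # memory registers, initialised to 0
    out = []
    for m1 in bits + [0, 0]:            # append flush bits
        n1 = (m1 + m0 + m_minus1) % 2   # G1 = (1,1,1)
        n2 = (m0 + m_minus1) % 2        # G2 = (0,1,1)
        n3 = (m1 + m_minus1) % 2        # G3 = (1,0,1)
        out += [n1, n2, n3]
        m0, m_minus1 = m1, m0           # bit-shift registers to the right
    return out
```

Feeding a single '1' (the impulse) produces the three generator sequences interleaved: `encode_r13([1])` gives `[1,0,1, 1,1,0, 1,1,1]`.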
 
Convolutional codes can be systematic and non-systematic:
 
* systematic repeats the structure of the message before encoding
* non-systematic changes the initial structure
 
Non-systematic convolutional codes are more popular due to better noise immunity. This relates to the free distance of the convolutional code.<ref>Moon, Todd K. "Error correction coding." Mathematical Methods and Algorithms. John Wiley and Sons (2005). p. 508</ref>
 
<gallery heights="150">
File:Non-systematic convolutional code.png|A short illustration of non-systematic convolutional code.
File:Systematic convolutional code.png|A short illustration of systematic convolutional code.
</gallery>
 
== Recursive and non-recursive codes ==
The encoder on the picture above is a ''non-recursive'' encoder. Below is an example of a recursive one; as such, it admits a feedback structure:
 
[[Image:Convolutional encoder recursive.svg|thumb|340px|none|Img.2. Rate 1/2 8-state recursive systematic convolutional encoder. Used as constituent code in 3GPP 25.212 Turbo Code.]]
 
The example encoder is ''[[systematic code|systematic]]'' because the input data is also used in the output symbols (Output 2). Codes with output symbols that do not include the input data are called ''non-systematic.''
 
Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. It isn't a strict requirement, but a common practice.
 
The example encoder in Img. 2. is an 8-state encoder because the 3 registers will create 8 possible encoder states (2<sup>3</sup>). A corresponding decoder trellis will typically use 8 states as well.
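The recursive structure of the Img. 2 encoder can be sketched as follows. This is a minimal illustration (function name ours), assuming feedback taps at delays 2 and 3 and feedforward taps at delays 1 and 3, i.e. the 1+''z''<sup>−2</sup>+''z''<sup>−3</sup> / 1+''z''<sup>−1</sup>+''z''<sup>−3</sup> polynomial pair of the 3GPP turbo constituent code:

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder sketch.

    Feedback polynomial 1 + z^-2 + z^-3; feedforward 1 + z^-1 + z^-3
    (8-state constituent code of 3GPP turbo codes). Returns the
    (systematic, parity) streams; trellis termination is omitted.
    """
    s = [0, 0, 0]                 # shift register: [z^-1, z^-2, z^-3]
    systematic, parity = [], []
    for b in bits:
        a = b ^ s[1] ^ s[2]       # feedback sum enters the register chain
        p = a ^ s[0] ^ s[2]       # feedforward taps produce the parity bit
        systematic.append(b)      # systematic output repeats the input
        parity.append(p)
        s = [a] + s[:2]           # shift
    return systematic, parity
```

Because of the feedback, a single '1' at the input produces an infinite (IIR) parity response, in contrast to the FIR behaviour of the non-recursive encoder.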
 
Recursive systematic convolutional (RSC) codes have become more popular due to their use in Turbo Codes. Recursive systematic codes are also referred to as pseudo-systematic codes.
 
Other RSC codes and example applications include:
 
[[File:Two-state_RSC_code.png|thumb|340px|none|Img. 3. Two-state recursive systematic convolutional (RSC) code. Also called an 'accumulator.']]
 
Useful for [[Low-density parity-check code|LDPC]] code implementation and as an inner constituent code for [[serial concatenated convolutional codes]] (SCCCs).
 
[[File:Four-state_RSC_code.png|thumb|340px|none|Img. 4. Four-state recursive systematic convolutional (RSC) code.]]
 
Useful for SCCCs and multidimensional turbo codes.
 
[[File:Sixteen-state_RSC_code.png|thumb|340px|none|Img. 5. Sixteen-state recursive systematic convolutional (RSC) code.]]
 
Useful as constituent code in low error rate turbo codes for applications such as satellite links. Also suitable as SCCC outer code.
 
== Impulse response, transfer function, and constraint length ==
A convolutional encoder is called so because it performs a ''[[convolution]]'' of the input stream with the encoder's ''impulse responses'':
 
:<math>y_i^j=\sum_{k=0}^{\infty} h^j_k x_{i-k} = (x * h^j)[i],</math>
 
where {{mvar|x}} is an input sequence, {{mvar|y<sup>j</sup>}} is a sequence from output {{mvar|j}}, {{mvar|h<sup>j</sup>}} is an impulse response for output {{mvar|j}} and <math>{*}</math> denotes convolution.
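The sum above is an ordinary convolution with arithmetic carried out in GF(2). A direct transcription as a Python sketch (stdlib only, name illustrative):

```python
def conv_mod2(x, h):
    """y[i] = sum_k h[k] * x[i-k]  (mod 2): convolution over GF(2).

    Equivalent to multiplying the polynomials with coefficients x and h
    and reducing each coefficient modulo 2.
    """
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for k, hk in enumerate(h):
            y[i + k] ^= hk & xi     # XOR accumulates the mod-2 sum
    return y
```

For the first encoder, convolving the input stream with the impulse response ''h''<sup>1</sup> = (1, 1, 1) reproduces its first output stream; e.g. the input 1 + ''x''<sup>2</sup> + ''x''<sup>3</sup> convolved with 1 + ''x'' + ''x''<sup>2</sup> yields 1 + ''x'' + ''x''<sup>5</sup>.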
 
A convolutional encoder is a discrete [[LTI system|linear time-invariant system]]. Every output of an encoder can be described by its own [[transfer function]], which is closely related to the generator polynomial. An impulse response is connected with a transfer function through [[Z-transform]].
 
Transfer functions for the first (non-recursive) encoder are:
* <math>H_1(z)=1+z^{-1}+z^{-2},\,</math>
* <math>H_2(z)=z^{-1}+z^{-2},\,</math>
* <math>H_3(z)=1+z^{-2}.\,</math>
 
Transfer functions for the second (recursive) encoder are:
* <math>H_1(z)=\frac{1+z^{-1}+z^{-3}}{1+z^{-2}+z^{-3}},\,</math>
* <math>H_2(z)=1.\,</math>
 
Define {{mvar|m}} by
: <math> m = \max_i \operatorname{polydeg} (H_i(1/z)) \,</math>
 
where, for any [[rational function]] <math>f(z) = P(z)/Q(z) \,</math>,
: <math> \operatorname{polydeg}(f) = \max (\deg(P), \deg(Q)) \,</math>.
 
Then {{mvar|m}} is the maximum of the [[degree of a polynomial|polynomial degrees]] of the <math> H_i(1/z) \,</math>, and the ''constraint length'' is defined as <math> K = m + 1 \,</math>. For instance, in the first example the constraint length is 3, and in the second the constraint length is 4.
 
== Trellis diagram ==
{{See also|Trellis (graph)|Trellis coded modulation}}
 
A convolutional encoder is a [[Finite-state machine|finite state machine]]. An encoder with ''n'' binary cells will have 2<sup>''n''</sup> states.
 
Imagine that the encoder (shown on Img.1, above) has '1' in the left memory cell (''m''<sub>0</sub>), and '0' in the right one (''m''<sub>−1</sub>). (''m''<sub>1</sub> is not really a memory cell because it represents a current value). We will designate such a state as "10". According to an input bit the encoder at the next turn can convert either to the "01" state or the "11" state. One can see that not all transitions are possible (e.g., a decoder can't convert from the "10" state to "00" or even stay in the "10" state).
 
All possible transitions can be shown as below:
 
[[Image:Convolutional code trellis diagram.svg|thumb|340px|none|Img.6. A trellis diagram for the encoder on Img.1. A path through the trellis is shown as a red line. The solid lines indicate transitions where a "0" is input and the dashed lines where a "1" is input.]]
 
An actual encoded sequence can be represented as a path on this graph. One valid path is shown in red as an example.
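The state transitions described above can be enumerated in a few lines. A Python sketch (name illustrative) that builds the transition table for the Img.1 encoder, with states written as the bit string "''m''<sub>0</sub>''m''<sub>−1</sub>":

```python
def trellis_r13():
    """State-transition table for the Img.1 rate-1/3 encoder.

    Maps (state, input_bit) -> (next_state, (n1, n2, n3)).
    On input m1 the encoder emits its three output bits and shifts
    to state (m1, m0).
    """
    table = {}
    for m0 in (0, 1):
        for mm in (0, 1):                 # mm stands for m_-1
            state = f"{m0}{mm}"
            for m1 in (0, 1):
                out = ((m1 + m0 + mm) % 2,   # G1 = (1,1,1)
                       (m0 + mm) % 2,        # G2 = (0,1,1)
                       (m1 + mm) % 2)        # G3 = (1,0,1)
                table[(state, m1)] = (f"{m1}{m0}", out)
    return table
```

Querying the table confirms the claim above: from state "10" the only reachable successors are "01" and "11".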
This diagram gives us an idea about ''decoding'': if a received sequence doesn't fit this graph, then it was received with errors, and we must choose the nearest ''correct'' (fitting the graph) sequence. The real decoding algorithms exploit this idea.
 
=== Free distance and error distribution ===
[[File:Recnonrecber.png|thumb|right|300px|Theoretical bit-error rate curves of encoded QPSK (recursive and non-recursive, soft decision), additive white Gaussian noise channel. The curves are barely distinguishable because the free distances and weights are approximately the same.]]
 
The '''free distance'''<ref>Moon, Todd K. "[https://leseprobe.buch.de/images-adb/7b/4f/7b4f94db-7c55-4836-9b61-2ff98cb242d9.pdf Error correction coding]." Mathematical Methods and Algorithms. John Wiley and Sons (2005). p. 508</ref> (''d'') is the minimal [[Hamming distance]] between different encoded sequences. The ''correcting capability'' (''t'') of a convolutional code is the number of errors that can be corrected by the code. It can be calculated as
 
:<math>t=\left \lfloor \frac{d-1}{2} \right \rfloor.</math>
 
Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value of ''t'' applies to a quantity of errors located relatively near to each other. That is, multiple groups of ''t'' errors can usually be fixed when they are relatively far apart.
 
Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder. The fact that errors appear as "bursts" should be accounted for when designing a [[Concatenated error correction code|concatenated code]] with an inner convolutional code. The popular solution for this problem is to [[Error correction code#Interleaving|interleave]] data before convolutional encoding, so that the outer block (usually [[Reed–Solomon error correction|Reed–Solomon]]) code can correct most of the errors.
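Because the code is linear, the free distance equals the minimum Hamming weight of a nonzero path that leaves the all-zero trellis state and first returns to it. A Dijkstra-style Python sketch over the trellis (stdlib only, names illustrative), for feedforward rate-1/''n'' codes:

```python
import heapq

def free_distance(gens):
    """Free distance of a rate-1/n feedforward convolutional code.

    gens: generator taps applying to (m1, m0, m_-1, ...), e.g.
    ((1,1,1), (0,1,1), (1,0,1)) for the Img.1 encoder. Shortest-path
    search finds the minimum-weight nonzero detour from the zero state.
    """
    K = len(gens[0])                      # constraint length
    zero = (0,) * (K - 1)                 # all-zero state

    def step(state, bit):
        window = (bit,) + state           # current input plus registers
        w = sum(sum(g * x for g, x in zip(taps, window)) % 2
                for taps in gens)         # Hamming weight of output symbol
        return (bit,) + state[:-1], w

    start, w0 = step(zero, 1)             # force a nonzero departure
    best = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == zero:
            return d                      # first return to the zero state
        if d > best[s]:
            continue                      # stale heap entry
        for b in (0, 1):
            t, w = step(s, b)
            if d + w < best.get(t, d + w + 1):
                best[t] = d + w
                heapq.heappush(heap, (d + w, t))
    return None
```

For the Img.1 generators this yields ''d'' = 7, hence ''t'' = ⌊(7 − 1)/2⌋ = 3 correctable errors per error event.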
 
== Decoding convolutional codes ==
{{See also|Viterbi algorithm}}
 
[[File:Convolutional codes PSK QAM LLR.svg|thumb|right|300px| Bit error ratio curves for convolutional codes with different options of digital modulations ([[Phase-shift keying|QPSK, 8-PSK]], [[Quadrature amplitude modulation|16-QAM, 64-QAM]]) and [[Likelihood function#Log-likelihood|LLR]] Algorithms.<ref>[https://www.mathworks.com/help/comm/examples/llr-vs-hard-decision-demodulation.html LLR vs. Hard Decision Demodulation (MathWorks)]</ref><ref>[https://www.mathworks.com/help/comm/ug/estimate-ber-for-hard-and-soft-decision-viterbi-decoding.html Estimate BER for Hard and Soft Decision Viterbi Decoding (MathWorks)]</ref> (Exact<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6yjx Digital modulation: Exact LLR Algorithm (MathWorks)]</ref> and Approximate<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6ymu Digital modulation: Approximate LLR Algorithm (MathWorks)]</ref>) over additive white Gaussian noise channel.]]
 
Several [[algorithm]]s exist for decoding convolutional codes. For relatively small values of ''k'', the [[Viterbi algorithm]] is universally used as it provides [[Maximum likelihood estimation|maximum likelihood]] performance and is highly parallelizable. Viterbi decoders are thus easy to implement in [[Very-large-scale integration|VLSI]] hardware and in software on CPUs with [[Single instruction, multiple data|SIMD]] instruction sets.
 
Longer constraint length codes are more practically decoded with any of several [[sequential decoding]] algorithms, of which the [[Robert Fano|Fano]] algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the [[Pioneer program]] of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large [[Reed–Solomon error correction]] codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.
 
Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the [[Viterbi algorithm#Soft output Viterbi algorithm|Soft output Viterbi algorithm]]. [[Maximum a posteriori estimation|Maximum a posteriori]] (MAP) soft decisions for each bit can be obtained by use of the [[BCJR algorithm]].
A ''perforation'' is a technique used to make a ''m/n'' rate code from a "basic" ''1/2'' code. It is reached by deletion of some bits in the decoder output. Bits are deleted according to ''perforation matrix''. The following perforation matrixes are the most frequently used:
 
== Popular convolutional codes ==
[[File:Conv code 177 133.png|400px|left|thumb|Shift-register for the (7, [171, 133]) convolutional code polynomial. Branches: <math>h^1 = 171_o = [1111001]_b</math>, <math>h^2 = 133_o = [1011011]_b</math>. All of the math operations are performed modulo 2.]]
[[File:Lenss.png|thumb|right|300px|Theoretical bit-error rate curves of encoded QPSK (soft decision), additive white Gaussian noise channel. Longer constraint lengths produce more powerful codes, but the [[complexity]] of the Viterbi algorithm [[exponential growth|increases exponentially]] with constraint lengths, limiting these more powerful codes to deep space missions where the extra performance is easily worth the increased decoder complexity.]]
 
In practice, predefined convolutional code structures obtained from scientific research are used in industry. This avoids the risk of selecting a ''catastrophic'' convolutional code (one that causes a larger number of errors).
 
An especially popular Viterbi-decoded convolutional code, used at least since the [[Voyager program]], has a constraint length {{mvar|K}} of 7 and a rate ''r'' of 1/2.<ref>Butman, S. A., L. J. Deutsch, and R. L. Miller. [https://ipnpr.jpl.nasa.gov/progress_report/42-63/63H.PDF "Performance of concatenated codes for deep space missions." ] The Telecommunications and Data Acquisition Progress Report 42-63, March–April 1981 (1981): 33-39.</ref>
 
[[Mars Pathfinder]], [[Mars Exploration Rover]] and the [[Cassini–Huygens|Cassini probe]] to Saturn use a {{mvar|K}} of 15 and a rate of 1/6; this code performs about 2&nbsp;dB better than the simpler <math>K=7</math> code at a cost of 256× in decoding complexity (compared to Voyager mission codes).
 
The convolutional code with a constraint length of 2 and a rate of 1/2 is used in [[GSM]] as an error correction technique.<ref>[http://www.scholarpedia.org/article/Global_system_for_mobile_communications_(GSM) Global system for mobile communications (GSM)]</ref>
 
== Punctured convolutional codes ==
{{See also|Punctured code}}
 
[[File:Soft34.png|thumb|right|300px|Convolutional codes with 1/2 and 3/4 code rates (and constraint length 7, Soft decision, 4-QAM / QPSK / OQPSK).<ref>[https://ch.mathworks.com/help/comm/ug/punctured-convolutional-coding-1.html Punctured Convolutional Coding (MathWorks)]</ref>]]
 
Convolutional code with any code rate can be designed based on polynomial selection;<ref>{{Cite web|url=https://www.mathworks.com/help/comm/ref/poly2trellis.html|title=Convert convolutional code polynomials to trellis description – MATLAB poly2trellis}}</ref> however, in practice, a puncturing procedure is often used to achieve the required code rate. [[Punctured code|Puncturing]] is a technique used to make an ''m''/''n'' rate code from a "basic" low-rate (e.g., 1/''n'') code. It is achieved by deleting some bits in the encoder output. Bits are deleted according to a ''puncturing matrix''. The following puncturing matrices are the most frequently used:
 
{| class="wikitable" |cellpadding="2"
! Code rate
! Puncturing matrix
! width="128px" | Free distance (for NASA standard K=7 convolutional code)
|-
| 1/2<br />(No perf.)
|
{| border="0"
|-
| 1
|-
| 1
|}
| 10
|-
| 2/3
|
{| border="0"
|-
| 1
| 0
|-
| 1
| 1
|}
| 6
|-
| 3/4
|
{| border="0"
|-
| 1
| 0
| 1
|-
| 1
| 1
| 0
|}
| 5
|-
| 5/6
|
{| border="0"
|-
| 1
| 0
| 1
| 0
| 1
|-
| 1
| 1
| 0
| 1
| 0
|}
| 4
|-
| 7/8
|
{| border="0"
|-
| 1
| 0
| 0
| 0
| 1
| 0
| 1
|-
| 1
| 1
| 1
| 1
| 0
| 1
| 0
|}
| 3
|}
 
For example, if we want to make a code with rate 2/3 using the appropriate matrix from the above table, we should take a basic encoder output and transmit every first bit from the first branch and every bit from the second one. The specific order of transmission is defined by the respective communication standard.
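The deletion step can be sketched as a small Python routine (name and data layout ours, for illustration only); a 0 in the puncturing matrix deletes the corresponding output bit, cycling through the matrix columns:

```python
def puncture(branches, matrix):
    """Puncture the output of a rate-1/n mother code.

    branches: one list of output bits per encoder branch.
    matrix: one row of the puncturing matrix per branch; a 0 deletes
    the corresponding bit. Bits are emitted column by column (the
    actual transmission order is defined by the communication standard).
    """
    period = len(matrix[0])
    out = []
    for i in range(len(branches[0])):
        for branch, row in zip(branches, matrix):
            if row[i % period]:          # 1 -> transmit, 0 -> delete
                out.append(branch[i])
    return out
```

With the rate-2/3 matrix, four input bits produce eight mother-code bits, of which six are transmitted, i.e. the punctured rate is indeed 2/3.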
 
Punctured convolutional codes are widely used in [[Communications satellite|satellite communications]], for example, in [[Intelsat]] systems and [[DVB|Digital Video Broadcasting]].
 
Punctured convolutional codes are also called "perforated".
 
== Turbo codes: replacing convolutional codes ==
{{Main article|Turbo code}}
 
[[File:Turbo codes scheme.png|thumb|350 px| A turbo code with component codes 13, 15.<ref>[http://www.scholarpedia.org/article/Turbo_codes Turbo code]</ref> Turbo codes get their name because the decoder uses feedback, like a turbo engine. Permutation means the same as interleaving. C1 and C2 are recursive convolutional codes. Recursive and non-recursive convolutional codes are not very different in BER performance; however, the recursive type is implemented in turbo convolutional codes due to better interleaving properties.<ref>Benedetto, Sergio, and Guido Montorsi. "[https://web.archive.org/web/20190406181758/https://ieeexplore.ieee.org/abstract/document/390945/ Role of recursive convolutional codes in turbo codes]." Electronics Letters 31.11 (1995): 858-859.</ref>]]
 
Simple Viterbi-decoded convolutional codes are now giving way to [[turbo code]]s, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed by [[Noisy-channel coding theorem|Shannon's theorem]] with much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance. [[Concatenated error correction code|Concatenation]] with an outer algebraic code (e.g., [[Reed–Solomon error correction|Reed–Solomon]]) addresses the issue of [[error floor]]s inherent to turbo code designs.
 
== See also ==
* [[Quantum convolutional code]]
 
== References ==
* {{FS1037C}}
{{reflist}}
 
== External links ==
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]], discusses convolutional codes in Chapter 48.
* [http://www.eccpage.com/ The Error Correcting Codes (ECC) Page]
*[https://web.archive.org/web/20140522015414/http://www.mathworks.fr/fr/help/comm/convolutional-coding.html Matlab explanations]
* [http://www.ni.com/white-paper/14917/en/ Fundamentals of Convolutional Decoders for Better Digital Communications]
* [http://web.mit.edu/6.02/www/s2009/handouts/labs/lab5.shtml Convolutional codes (MIT)]
* [http://www5.tu-ilmenau.de/nt/de/teachings/vorlesungen/itsc_master/folien/script.pdf Information Theory and Coding (TU Ilmenau)] {{Webarchive|url=https://web.archive.org/web/20170830033846/http://www5.tu-ilmenau.de/nt/de/teachings/vorlesungen/itsc_master/folien/script.pdf |date=2017-08-30 }}, discusses convolutional codes on page 48.
 
== Further reading ==
 
=== Publications ===
{{refbegin}}
* Francis, Michael. "Viterbi Decoder Block Decoding-Trellis Termination and Tail Biting." Xilinx XAPP551 v2. 0, DD (2005): 1-21.
* Chen, Qingchun, Wai Ho Mow, and Pingzhi Fan. "Some new results on recursive convolutional codes and their applications." Information Theory Workshop, 2006. ITW'06 Chengdu. IEEE. IEEE, 2006.
* Fiebig, U-C., and Patrick Robertson. "Soft-decision and erasure decoding in fast frequency-hopping systems with convolutional, turbo, and Reed-Solomon codes." IEEE Transactions on Communications 47.11 (1999): 1646-1654.
* Bhaskar, Vidhyacharan, and Laurie L. Joiner. "Performance of punctured convolutional codes in asynchronous CDMA communications under perfect phase-tracking conditions." Computers & Electrical Engineering 30.8 (2004): 573-592.
* Modestino, J., and Shou Mui. "Convolutional code performance in the Rician fading channel." IEEE Transactions on Communications 24.6 (1976): 592-606.
* Chen, Yuh-Long, and Che-Ho Wei. "Performance evaluation of convolutional codes with MPSK on Rician fading channels." IEE Proceedings F-Communications, Radar and Signal Processing. Vol. 134. No. 2. IET, 1987.
{{refend}}
 
[[Category:Error detection and correction]]