{{Short description|Type of error-correcting code using convolution}}
In [[Telecommunications|telecommunication]], a '''convolutional code''' is a type of [[error correction code|error-correcting code]] that generates parity symbols via the sliding application of a Boolean polynomial function to a data stream. The sliding application represents the "convolution" of the encoder over the data, which gives rise to the term "convolutional coding".
The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the base [[code rate]] and the depth (or memory) of the encoder <math>[n,k,K]</math>.
Convolutional codes are often described as continuous.
The code rate of a convolutional code is commonly modified via [[Punctured code|symbol puncturing]].
== History ==
Convolutional codes were introduced in 1955 by [[Peter Elias]]. It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay. In 1967, [[Andrew Viterbi]] determined that convolutional codes could be maximum-likelihood decoded with reasonable complexity using time-invariant trellis-based decoders: the [[Viterbi algorithm]]. Other trellis-based decoder algorithms were later developed, including the [[BCJR]] decoding algorithm.
Recursive systematic convolutional codes were invented by [[Claude Berrou]] around 1991. These codes proved especially useful for iterative processing, including the processing of concatenated codes such as [[turbo codes]].<ref>Benedetto, Sergio, and Guido Montorsi. "[https://ieeexplore.ieee.org/abstract/document/390945/ Role of recursive convolutional codes in turbo codes]." Electronics Letters 31.11 (1995): 858-859.</ref>
Using the "convolutional" terminology, a classic convolutional code might be considered a [[Finite impulse response]] (FIR) filter, while a recursive convolutional code might be considered an [[Infinite impulse response]] (IIR) filter.
== Where convolutional codes are used ==
[[File:GSM convol code.png|thumb|right|400px|Stages of channel coding in GSM.<ref>Eberspächer J. et al. GSM-architecture, protocols and services. John Wiley & Sons, 2008. p.97</ref> Block encoder and Parity check – error detection part. Convolutional encoder and Viterbi decoder – error correction part. [[Error correction code#Interleaving|Interleaving]] and Deinterleaving – code words separation increasing in time ___domain and to avoid bursty distortions.]]
Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as [[digital video]], radio, [[Mobile telephony|mobile communications]] (e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)<ref>3rd Generation Partnership Project (September 2012). "3GGP TS45.001: Technical Specification Group GSM/EDGE Radio Access Network; Physical layer on the radio path; General description". Retrieved 2013-07-20.</ref><ref>Halonen, Timo, Javier Romero, and Juan Melero, eds. GSM, GPRS and EDGE performance: evolution towards 3G/UMTS. John Wiley & Sons, 2004. p. 430</ref>) and [[Communications satellite|satellite communications]].<ref>Butman, S. A., L. J. Deutsch, and R. L. Miller. [http://tda.jpl.nasa.gov/progress_report/42-63/63H.PDF "Performance of concatenated codes for deep space missions."] The Telecommunications and Data Acquisition Progress Report 42-63, March–April 1981 (1981): 33-39.</ref> These codes are often implemented in [[Concatenated error correction code|concatenation]] with a hard-decision code, particularly [[Reed–Solomon error correction|Reed–Solomon]].
== Convolutional encoding ==
To convolutionally encode data, start with ''k'' [[Memory cell (computing)|memory registers]], each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has ''n'' modulo-2 [[Adder (electronics)|adders]] (a modulo-2 adder can be implemented with a single [[XOR gate]]) and ''n'' generator polynomials, one for each adder.
[[File:Convolutional encoder non-recursive.png|frame|Img.1. Rate 1/3 non-recursive, non-systematic convolutional encoder with constraint length 3]]
Img. 1 above shows a rate {{frac|1|3}} ({{frac|''m''|''n''}}) encoder with constraint length (''k'') of 3.
:''n''<sub>1</sub> = ''m''<sub>1</sub> + ''m''<sub>0</sub> + ''m''<sub>−1</sub>
:''n''<sub>2</sub> = ''m''<sub>0</sub> + ''m''<sub>−1</sub>
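As a sketch, the two parity equations above can be implemented with a shift register and modulo-2 adders. The following Python snippet (a hypothetical helper, covering only the two outputs whose equations are listed here, so it behaves as a rate-1/2 encoder rather than the full rate-1/3 encoder of Img. 1) illustrates the idea:

```python
def conv_encode(bits, generators=((1, 1, 1), (0, 1, 1))):
    """Feed bits through a constraint-length-3 shift register.

    Each generator lists the taps over (m1, m0, m-1):
    the current bit and the two previous bits.
    """
    state = [0, 0]  # (m0, m-1), registers start at zero
    out = []
    for b in bits:
        window = [b] + state  # (m1, m0, m-1)
        for g in generators:
            # modulo-2 sum of the tapped positions
            out.append(sum(w & t for w, t in zip(window, g)) % 2)
        state = [b, state[0]]  # shift the register
    return out
```

For each input bit, the encoder emits the pair (''n''<sub>1</sub>, ''n''<sub>2</sub>) computed from the tap equations above.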
</gallery>
== Recursive and non-recursive codes ==
The encoder in the picture above is a ''non-recursive'' encoder. Below is an example of a recursive encoder, which therefore has a feedback structure:
[[Image:Convolutional encoder recursive.svg|thumb|340px|none|Img.2. Rate 1/2 8-state recursive systematic convolutional encoder.]]
The example encoder is ''[[systematic code|systematic]]'' because the input data is also used in the output symbols (Output 2). Codes with output symbols that do not include the input data are called ''non-systematic.''
Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. This is not a strict requirement, but common practice.
The example encoder in Img. 2. is an 8-state encoder because the 3 registers will create 8 possible encoder states (2<sup>3</sup>).
Recursive systematic convolutional (RSC) codes have become more popular due to their use in Turbo Codes. Recursive systematic codes are also referred to as pseudo-systematic codes.
Useful as constituent code in low error rate turbo codes for applications such as satellite links. Also suitable as SCCC outer code.
== Impulse response, transfer function, and constraint length ==
A convolutional encoder is so named because it performs a ''[[convolution]]'' of the input stream with the encoder's ''impulse responses'':
: <math> \operatorname{polydeg}(f) = \max (\deg(P), \deg(Q)) \,</math>.
Then {{mvar|m}} is the maximum of the [[degree of a polynomial|polynomial degrees]] of the
<math> H_i(1/z) \,</math>, and the ''constraint length'' is defined as <math> K = m + 1 \,</math>. For instance, in the first example the constraint length is 3, and in the second the constraint length is 4.
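Under this definition, the constraint length can be read directly off octal generator specifications, since an octal generator's bit length equals its polynomial degree plus one. A minimal sketch (`constraint_length` is a hypothetical helper, not part of any standard library):

```python
def constraint_length(generators_octal):
    # K = m + 1, where m is the largest polynomial degree among the
    # generators; a generator's bit length equals its degree plus one
    return max(int(g, 8).bit_length() for g in generators_octal)
```

For example, the generators (171, 133) in octal give a constraint length of 7, and (7, 5) give 3.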
== Trellis diagram ==
{{Main|Trellis diagram}}
A convolutional encoder is a [[Finite-state machine|finite state machine]]. An encoder with ''n'' binary cells will have 2<sup>''n''</sup> states.
This diagram gives us an idea about ''decoding'': if a received sequence doesn't fit this graph, then it was received with errors, and we must choose the nearest ''correct'' (graph-fitting) sequence. Real decoding algorithms exploit this idea.
== Free distance and error distribution ==
[[File:Recnonrecber.png|thumb|right|300px|Theoretical bit-error rate curves of encoded QPSK (recursive and non-recursive, soft decision), additive white Gaussian noise channel. The curves are nearly indistinguishable because the codes have approximately the same free distances and weights.]]
Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value of ''t'' applies to a quantity of errors located relatively near to each other. That is, multiple groups of ''t'' errors can usually be fixed when they are relatively far apart.
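The number of errors correctable within one such group follows from the free distance {{mvar|d}} via the standard bound <math>t = \lfloor (d-1)/2 \rfloor</math>. As a one-line sketch (hypothetical helper):

```python
def correctable_errors(d_free):
    # t = floor((d_free - 1) / 2) errors per group, from the free distance
    return (d_free - 1) // 2
```

So a code with free distance 10 can correct up to 4 closely spaced errors per group.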
Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder. The fact that errors appear as "bursts" should be accounted for when designing a [[Concatenated error correction code|concatenated code]] with an inner convolutional code. The popular solution for this problem is to [[Interleaving (data)|interleave]] data before convolutional encoding, so that the outer block code (usually [[Reed–Solomon error correction|Reed–Solomon]]) can correct most of the errors.
== Decoding convolutional codes ==
{{See also|Viterbi algorithm}}
[[File:Convolutional codes PSK QAM LLR.svg|thumb|right|300px| Bit error ratio curves for convolutional codes with different options of digital modulations ([[Phase-shift keying|QPSK, 8-PSK]], [[Quadrature amplitude modulation|16-QAM, 64-QAM]]) and [[Likelihood function#Log-likelihood|LLR]] Algorithms.<ref>[https://www.mathworks.com/help/comm/examples/llr-vs-hard-decision-demodulation.html LLR vs. Hard Decision Demodulation (MathWorks)]</ref><ref>[https://www.mathworks.com/help/comm/ug/estimate-ber-for-hard-and-soft-decision-viterbi-decoding.html Estimate BER for Hard and Soft Decision Viterbi Decoding (MathWorks)]</ref> (Exact<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6yjx Digital modulation: Exact LLR Algorithm (MathWorks)]</ref> and Approximate<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6ymu Digital modulation: Approximate LLR Algorithm (MathWorks)]</ref>) over additive white Gaussian noise channel.]]
Several [[algorithm]]s exist for decoding convolutional codes. For relatively small values of ''k'', the [[Viterbi algorithm]] is universally used as it provides [[Maximum likelihood estimation|maximum likelihood]] performance and is highly parallelizable. Viterbi decoders are thus easy to implement in [[Very-large-scale integration|VLSI]] hardware and in software on CPUs with [[Single instruction, multiple data|SIMD]] instruction sets.
Longer constraint length codes are more practically decoded with any of several [[sequential decoding]] algorithms, of which the [[Robert Fano|Fano]] algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the [[Pioneer program]] of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large [[Reed–Solomon error correction]] codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.
Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the [[Viterbi algorithm#Soft output Viterbi algorithm|Soft output Viterbi algorithm]]. [[Maximum a posteriori estimation|Maximum a posteriori]] (MAP) soft decisions for each bit can be obtained by use of the [[BCJR algorithm]].
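A minimal hard-decision Viterbi decoder can be sketched as follows. This toy example (a sketch, not a production implementation) assumes a rate-1/2, constraint-length-3 code with generators (7, 5) in octal, which is not the article's example encoder; it tracks one path metric per shift-register state and keeps the survivor with the smallest Hamming distance:

```python
def viterbi_decode(received, generators=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding of a rate-1/n convolutional code."""
    K = len(generators[0])            # constraint length
    n_states = 1 << (K - 1)
    mask = n_states - 1
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    n = len(generators)
    for t in range(0, len(received), n):
        block = received[t:t + n]
        new_metrics = [INF] * n_states
        new_paths = [[]] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            for bit in (0, 1):
                # window = (current bit, previous K-1 bits, newest first)
                window = [bit] + [(state >> i) & 1 for i in range(K - 1)]
                out = [sum(w & g for w, g in zip(window, gen)) % 2
                       for gen in generators]
                dist = metrics[state] + sum(a != b for a, b in zip(out, block))
                nxt = ((state << 1) | bit) & mask
                if dist < new_metrics[nxt]:        # keep the survivor
                    new_metrics[nxt] = dist
                    new_paths[nxt] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    # return the input sequence of the overall best-metric path
    return paths[min(range(n_states), key=lambda s: metrics[s])]
```

With an error-free received sequence the decoder recovers the input exactly, and a single flipped channel bit is corrected because the competing paths accumulate larger Hamming distance.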
== Popular convolutional codes ==
[[File:Conv code 177 133.png|400px|left|thumb|Shift-register for the (7, [171, 133]) convolutional code polynomial. Branches: <math>h^1 = 171_o = [1111001]_b</math>, <math>h^2 = 133_o = [1011011]_b</math>. All arithmetic operations are performed modulo 2.]]
[[File:Lenss.png|thumb|right|300px|Theoretical bit-error rate curves of encoded QPSK (soft decision), additive white Gaussian noise channel. Longer constraint lengths produce more powerful codes, but the [[complexity]] of the Viterbi algorithm [[exponential growth|increases exponentially]] with constraint lengths, limiting these more powerful codes to deep space missions where the extra performance is easily worth the increased decoder complexity.]]
An especially popular Viterbi-decoded convolutional code, used at least since the [[Voyager program]], has a constraint length {{mvar|K}} of 7 and a rate ''r'' of 1/2.<ref>Butman, S. A., L. J. Deutsch, and R. L. Miller. [https://ipnpr.jpl.nasa.gov/progress_report/42-63/63H.PDF "Performance of concatenated codes for deep space missions." ] The Telecommunications and Data Acquisition Progress Report 42-63, March–April 1981 (1981): 33-39.</ref>
[[Mars Pathfinder]], [[Mars Exploration Rover]] and the [[Cassini–Huygens|Cassini probe]] to Saturn use a {{mvar|K}} of 15 and a rate of 1/6; this code performs about 2 dB better than the simpler <math>K=7</math> code at a cost of 256× in decoding complexity (compared to Voyager mission codes).
The convolutional code with a constraint length of 2 and a rate of 1/2 is used in [[GSM]] as an error correction technique.<ref>[http://www.scholarpedia.org/article/Global_system_for_mobile_communications_(GSM) Global system for mobile communications (GSM)]</ref>
== Punctured convolutional codes ==
{{See also|Punctured code}}
{| class="wikitable"
! Code rate
! Puncturing pattern
! [[Free distance]]
|-
| 1/2<br />(No perf.)
| 1<br />1
| 10
|-
| 2/3
| 1 0<br />1 1
| 6
|-
| 3/4
| 1 0 1<br />1 1 0
| 5
|-
| 5/6
| 1 0 1 0 1<br />1 1 0 1 0
| 4
|-
| 7/8
| 1 0 0 0 1 0 1<br />1 1 1 1 0 1 0
| 3
|}
For example, to make a code with rate 2/3 using the appropriate matrix from the above table, we take the basic encoder output and transmit every first bit from the first branch and every bit from the second branch. The specific order of transmission is defined by the respective communication standard.
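The puncturing step itself can be sketched as a simple masking of the mother code's interleaved output (a hypothetical helper; real standards fix the exact transmission order):

```python
def puncture(coded, pattern):
    """Drop bits of a rate-1/n mother code according to a puncturing matrix.

    coded:   interleaved branch outputs (branch0, branch1, branch0, ...)
    pattern: one row of 1s/0s per branch; 1 = transmit, 0 = drop
    """
    n = len(pattern)  # number of encoder branches
    out = []
    for i, bit in enumerate(coded):
        branch = i % n
        step = (i // n) % len(pattern[branch])
        if pattern[branch][step]:
            out.append(bit)
    return out
```

With the rate-2/3 matrix (rows 1 0 and 1 1), four mother-code bits from two input bits shrink to three transmitted bits, giving the punctured rate 2/3.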
Punctured convolutional codes are widely used in [[Communications satellite|satellite communications]], for example, in [[Intelsat]] systems and [[Digital Video Broadcasting]].
Punctured convolutional codes are also called "perforated".
== Turbo codes: replacing convolutional codes ==
{{Main|Turbo code}}
[[File:Turbo codes scheme.png|thumb|350 px| A turbo code with component codes 13, 15.<ref>[http://www.scholarpedia.org/article/Turbo_codes Turbo code]</ref> Turbo codes get their name because the decoder uses feedback, like a turbo engine. Permutation means the same as interleaving. C1 and C2 are recursive convolutional codes. Recursive and non-recursive convolutional codes do not differ much in BER performance; however, the recursive type is used in turbo convolutional codes because of its better interleaving properties.<ref>Benedetto, Sergio, and Guido Montorsi. "[https://web.archive.org/web/20190406181758/https://ieeexplore.ieee.org/abstract/document/390945/ Role of recursive convolutional codes in turbo codes]." Electronics Letters 31.11 (1995): 858-859.</ref>]]
Simple Viterbi-decoded convolutional codes are now giving way to [[turbo code]]s, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed by [[Noisy-channel coding theorem|Shannon's theorem]] with much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance. [[Concatenated error correction code|Concatenation]] with an outer algebraic code (e.g., [[Reed–Solomon error correction|Reed–Solomon]]) addresses the issue of [[error floor]]s inherent to turbo code designs.
== See also ==
* [[Quantum convolutional code]]
== References ==
* {{FS1037C}}
{{Reflist}}
== External links ==
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]], discusses convolutional codes in Chapter 48.
* [http://www.eccpage.com/ The Error Correcting Codes (ECC) Page]
*[https://web.archive.org/web/20140522015414/http://www.mathworks.fr/fr/help/comm/convolutional-coding.html Matlab explanations]
* [http://www.ni.com/white-paper/14917/en/ Fundamentals of Convolutional Decoders for Better Digital Communications]
* [http://web.mit.edu/6.02/www/s2009/handouts/labs/lab5.shtml Convolutional codes (MIT)]
* [http://www5.tu-ilmenau.de/nt/de/teachings/vorlesungen/itsc_master/folien/script.pdf Information Theory and Coding (TU Ilmenau)] {{Webarchive|url=https://web.archive.org/web/20170830033846/http://www5.tu-ilmenau.de/nt/de/teachings/vorlesungen/itsc_master/folien/script.pdf |date=2017-08-30 }}, discusses convolutional codes on page 48.
== Further reading ==
=== Publications ===
{{refbegin}}
* Francis, Michael. "Viterbi Decoder Block Decoding-Trellis Termination and Tail Biting." Xilinx XAPP551 v2. 0, DD (2005): 1-21.