In [[Telecommunications|telecommunication]], a '''convolutional code''' is a type of [[error correction code|error-correcting code]] that generates parity symbols via the sliding application of a Boolean polynomial function to a data stream. The sliding application represents the "convolution" of the encoder over the data, which gives rise to the term "convolutional coding".
The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the base [[code rate]] and the depth (or memory) of the encoder <math>[n,k,K]</math>.
Convolutional codes are often described as continuous, since they operate on a running data stream rather than on fixed-size blocks.
The code rate of a convolutional code is commonly modified via [[Puncturing|symbol puncturing]], described below.
== History ==
Convolutional codes were introduced in 1955 by [[Peter Elias]]. It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay. In 1967 [[Andrew Viterbi]] determined that convolutional codes could be maximum-likelihood decoded with reasonable complexity using time-invariant trellis-based decoders: the [[Viterbi algorithm]]. Other trellis-based decoder algorithms were later developed, including the [[BCJR]] decoding algorithm.

Recursive systematic convolutional codes were invented by [[Claude Berrou]] around 1991. These codes proved especially useful for iterative processing, including the processing of concatenated codes such as [[turbo codes]].<ref>Benedetto, Sergio, and Guido Montorsi. "[https://ieeexplore.ieee.org/abstract/document/390945/ Role of recursive convolutional codes in turbo codes]." Electronics Letters 31.11 (1995): 858-859.</ref>
Using the "convolutional" terminology, a classic convolutional code might be considered a [[Finite impulse response]] (FIR) filter, while a recursive convolutional code might be considered an [[Infinite impulse response]] (IIR) filter.
== Where convolutional codes are used ==
[[File:GSM convol code.png|thumb|right|400px|Stages of channel coding in GSM.<ref>Eberspächer J. et al. GSM-architecture, protocols and services. John Wiley & Sons, 2008. p. 97.</ref> Block encoder and parity check: error detection part. Convolutional encoder and Viterbi decoder: error correction part. [[Error_correction_code#Interleaving|Interleaving]] and deinterleaving: separation of code words in the time domain, used to avoid bursty distortions.]]
Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as [[digital video]], radio, [[Mobile telephony|mobile communications]] (e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)<ref>3rd Generation Partnership Project (September 2012). "3GPP TS45.001: Technical Specification Group GSM/EDGE Radio Access Network; Physical layer on the radio path; General description". Retrieved 2013-07-20.</ref><ref>Halonen, Timo, Javier Romero, and Juan Melero, eds. GSM, GPRS and EDGE performance: evolution towards 3G/UMTS. John Wiley & Sons, 2004. p. 430.</ref>) and [[satellite communications]].<ref>Butman, S. A., L. J. Deutsch, and R. L. Miller. [http://tda.jpl.nasa.gov/progress_report/42-63/63H.PDF "Performance of concatenated codes for deep space missions."] The Telecommunications and Data Acquisition Progress Report 42-63, March–April 1981 (1981): 33-39.</ref> These codes are often implemented in [[concatenated code|concatenation]] with a hard-decision code, particularly [[Reed–Solomon error correction|Reed–Solomon]]. Prior to [[turbo codes]], such constructions were the most efficient, coming closest to the [[Shannon-Hartley theorem|Shannon limit]].
== Convolutional encoding ==
To convolutionally encode data, start with ''k'' [[memory register]]s, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has ''n'' modulo-2 [[Adder (electronics)|adder]]s (a modulo 2 adder can be implemented with a single [[Boolean logic|Boolean]] [[XOR gate]], where the logic is: {{math|1=0+0 = 0}}, {{math|1=0+1 = 1}}, {{math|1=1+0 = 1}}, {{math|1=1+1 = 0}}), and ''n'' [[generator polynomial]]s — one for each adder (see figure below). An input bit ''m''<sub>1</sub> is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs ''n'' symbols. These symbols may be transmitted or punctured depending on the desired code rate. Now [[bit shift]] all register values to the right (''m''<sub>1</sub> moves to ''m''<sub>0</sub>, ''m''<sub>0</sub> moves to ''m''<sub>−1</sub>) and wait for the next input bit. If there are no remaining input bits, the encoder continues shifting until all registers have returned to the zero state (flush bit termination).
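The procedure above maps directly onto a few lines of code. The following is a minimal Python sketch (not taken from any standard or library) of a feed-forward encoder with flush-bit termination. The first two tap tuples in the usage line follow the adders ''n''<sub>1</sub> and ''n''<sub>2</sub> given for the figure below; the third generator is an assumed value chosen only for illustration.

<syntaxhighlight lang="python">
def conv_encode(bits, generators):
    """Feed-forward convolutional encoder sketch.

    bits       -- iterable of 0/1 input bits
    generators -- one tap tuple per output adder; each has length K,
                  the constraint length (= number of registers).
    """
    K = len(generators[0])
    regs = [0] * K                        # registers m1, m0, m-1, ... start at zero
    out = []
    for b in list(bits) + [0] * (K - 1):  # trailing zeros flush the registers
        regs = [b] + regs[:-1]            # new bit enters the leftmost register
        for g in generators:              # one modulo-2 sum (XOR) per generator
            out.append(sum(r & t for r, t in zip(regs, g)) % 2)
    return out

# Rate 1/3 example: taps (1,1,1) and (0,1,1) match n1 = m1+m0+m-1 and
# n2 = m0+m-1; the third tuple (1,0,1) is an illustrative assumption.
encoded = conv_encode([1, 0, 1, 1], [(1, 1, 1), (0, 1, 1), (1, 0, 1)])
</syntaxhighlight>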
[[File:Convolutional encoder non-recursive.png|frame|Img.1. Rate 1/3 non-recursive, non-systematic convolutional encoder with constraint length 3]]
The figure above (Img. 1) is a rate {{frac|1|3}} ({{frac|''m''|''n''}}) encoder with constraint length (''k'') of 3.
:''n''<sub>1</sub> = ''m''<sub>1</sub> + ''m''<sub>0</sub> + ''m''<sub>−1</sub>
:''n''<sub>2</sub> = ''m''<sub>0</sub> + ''m''<sub>−1</sub>
* systematic: the encoded output contains the message bits unchanged, alongside the parity bits
* non-systematic: the message bits do not appear directly in the encoded output
Non-systematic convolutional codes are more popular due to better noise immunity, which relates to the free distance of the convolutional code.<ref>Moon, Todd K. Error Correction Coding: Mathematical Methods and Algorithms. John Wiley and Sons (2005).</ref>
<gallery mode="packed" heights="150px">
File:Non-systematic convolutional code.png|A short illustration of a non-systematic convolutional code.
File:Systematic convolutional code.png|A short illustration of a systematic convolutional code.
</gallery>
== Recursive and non-recursive codes ==
The encoder in the picture above is a ''non-recursive'' encoder. Below is an example of a ''recursive'' encoder, which accordingly has a feedback structure:
[[Image:Convolutional encoder recursive.svg|thumb|340px|none|Img. 2. Rate 1/2 8-state recursive systematic convolutional encoder.]]
The example encoder is ''[[Systematic code|systematic]]'' because the input data also appears directly in the output symbols.
Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. This is not a strict requirement, but common practice.
The example encoder in Img. 2. is an 8-state encoder because the 3 registers will create 8 possible encoder states (2<sup>3</sup>).
Recursive systematic convolutional (RSC) codes have become more popular due to their use in turbo codes. Recursive systematic codes are also referred to as pseudo-systematic codes.
Useful as a constituent code in low-error-rate turbo codes for applications such as satellite links; also suitable as an SCCC outer code.
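As a rough illustration of the feedback structure described above, the following is a minimal Python sketch of a generic rate 1/2, 8-state recursive systematic encoder. The tap vectors <code>fb_taps</code> and <code>ff_taps</code> are assumptions chosen for illustration only and are not claimed to match Img. 2 or any standard.

<syntaxhighlight lang="python">
def rsc_encode(bits, fb_taps=(1, 0, 1), ff_taps=(1, 1, 0, 1)):
    """Rate 1/2 recursive systematic encoder sketch with 3 registers (8 states).

    fb_taps -- which of the 3 registers feed back into the input node
    ff_taps -- taps on the feedback node plus the 3 registers for the parity bit
    """
    state = [0, 0, 0]
    out = []
    for b in bits:
        # feedback node: input XOR the tapped register contents (recursive part)
        fb = (b + sum(s & t for s, t in zip(state, fb_taps))) % 2
        # parity output: feed-forward combination of the feedback node and registers
        parity = (fb * ff_taps[0] + sum(s & t for s, t in zip(state, ff_taps[1:]))) % 2
        out.extend([b, parity])          # systematic bit first, then parity bit
        state = [fb] + state[:-1]        # shift the feedback value into the registers
    return out
</syntaxhighlight>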
== Impulse response, transfer function, and constraint length ==
A convolutional encoder is so called because it performs a ''[[convolution]]'' of the input stream with the encoder's ''impulse responses'':
: <math> m = \max_i \operatorname{polydeg} (H_i(1/z)) \,</math>
where, for any [[rational function]] <math>f(z) = P(z)/Q(z) \,</math>,
: <math> \operatorname{polydeg}(f) = \max (\deg(P), \deg(Q)) \,</math>.
Then {{mvar|m}} is the maximum of the [[degree of a polynomial|polynomial degrees]] of the
<math> H_i(1/z) \,</math>, and the ''constraint length'' is defined as <math> K = m + 1 \,</math>. For instance, in the first example the constraint length is 3, and in the second the constraint length is 4.
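For the feed-forward (FIR) case, where each <math>H_i(1/z)</math> is simply a polynomial whose degree equals the largest tap delay, the constraint length can be read off directly from the tap lists. A minimal sketch, assuming generators are given as tap tuples as in the encoder sketch above:

<syntaxhighlight lang="python">
def constraint_length(generators):
    """Constraint length K = m + 1, where m is the largest nonzero tap index
    across all feed-forward generators (FIR case only)."""
    m = max(max(i for i, t in enumerate(g) if t) for g in generators)
    return m + 1

# The two generators quoted for the first example give K = 3, as in the text.
assert constraint_length([(1, 1, 1), (0, 1, 1)]) == 3
</syntaxhighlight>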
== Trellis diagram ==
A convolutional encoder is a [[Finite-state machine|finite state machine]]. An encoder with ''n'' binary cells will have 2<sup>''n''</sup> states.
All possible transitions can be shown as below:
[[Image:Convolutional code trellis diagram.svg|left|thumb|340px|A trellis diagram for the encoder.]]
An actual encoded sequence can be represented as a path on this graph. One valid path is shown in red as an example.
This diagram gives an idea of ''decoding'': if a received sequence does not fit this graph, then it was received with errors, and we must choose the nearest ''correct'' sequence (one that fits the graph). Real decoding algorithms exploit this idea.
== Free distance and error distribution ==
[[File:Recnonrecber.png|thumb|right|300px|Theoretical bit-error rate curves of encoded QPSK (recursive and non-recursive, soft decision), additive white Gaussian noise channel. The curves are barely distinguishable because the codes have approximately the same free distances and weights.]]
Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value of ''t'' applies to a quantity of errors located relatively near to each other. That is, multiple groups of ''t'' errors can usually be fixed when they are relatively far apart.
Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder. The fact that errors appear as "bursts" should be accounted for when designing a [[Concatenated error correction code|concatenated code]] with an inner convolutional code. The popular solution for this problem is to [[Error correction code#Interleaving|interleave]] the data between the outer block code and the inner convolutional code, so that a burst of errors from the convolutional decoder is spread across many outer code words and can be corrected.
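As a rough illustration of the idea (not the interleaver of any particular standard), a simple rectangular block interleaver writes symbols row by row and reads them out column by column, so that a burst of adjacent channel errors is spread far apart after deinterleaving:

<syntaxhighlight lang="python">
def interleave(symbols, rows, cols):
    """Write row by row, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse of interleave(): recover the original row-major order."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
assert deinterleave(interleave(data, 3, 4), 3, 4) == data
</syntaxhighlight>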
== Decoding convolutional codes ==
Several [[algorithm]]s exist for decoding convolutional codes. For relatively small values of ''k'', the [[Viterbi algorithm]] is universally used as it provides [[Maximum likelihood estimation|maximum likelihood]] performance and is highly parallelizable. Viterbi decoders are thus easy to implement in [[Very-large-scale integration|VLSI]] hardware and in software on CPUs with [[Single instruction, multiple data|SIMD]] instruction sets.
Longer constraint length codes are more practically decoded with any of several [[sequential decoding]] algorithms, of which the [[Robert Fano|Fano]] algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the [[Pioneer program]] of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large [[Reed–Solomon error correction]] codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.
Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the [[Viterbi algorithm#Soft output Viterbi algorithm|Soft output Viterbi algorithm]].
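For concreteness, here is a compact hard-decision Viterbi decoder sketch in Python matching the toy feed-forward encoder sketched earlier. It assumes flush-bit termination and a simple Hamming branch metric, and is meant only to illustrate the add-compare-select structure, not to represent a production decoder.

<syntaxhighlight lang="python">
from itertools import product

def viterbi_decode(received, generators):
    """Hard-decision Viterbi decoding for a rate 1/n feed-forward code.

    received   -- list of 0/1 channel bits, n per trellis step (possibly corrupted)
    generators -- tap tuples of length K, as used by the encoder sketch above
    Assumes the encoder was flushed with K-1 zero tail bits.
    """
    K = len(generators[0])
    n = len(generators)
    states = list(product((0, 1), repeat=K - 1))   # state = K-1 memory registers
    zero = states[0]
    metric = {s: (0 if s == zero else float("inf")) for s in states}
    paths = {s: [] for s in states}

    def transition(state, inp):
        """Output bits and next state for one trellis branch."""
        regs = (inp,) + state
        out = tuple(sum(r & t for r, t in zip(regs, g)) % 2 for g in generators)
        return out, regs[:-1]

    for t in range(0, len(received), n):
        block = received[t:t + n]
        new_metric = {s: float("inf") for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] == float("inf"):
                continue
            for inp in (0, 1):
                out, nxt = transition(s, inp)
                cand = metric[s] + sum(o != r for o, r in zip(out, block))  # Hamming metric
                if cand < new_metric[nxt]:                                  # compare-select
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[s] + [inp]
        metric, paths = new_metric, new_paths

    best = paths[zero]                    # terminated codes end in the all-zero state
    return best[:len(best) - (K - 1)]     # strip the tail bits
</syntaxhighlight>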
== Popular convolutional codes ==
[[File:Conv code 177 133.png|thumb|right|300px]]
[[File:Lenss.png|thumb|right|300px|Theoretical bit-error rate curves of encoded QPSK (soft decision), additive white Gaussian noise channel. Longer constraint lengths produce more powerful codes, but the [[complexity]] of the Viterbi algorithm [[exponential growth|increases exponentially]] with constraint lengths, limiting these more powerful codes to deep space missions where the extra performance is easily worth the increased decoder complexity.]]
In practice, industry uses predefined convolutional code structures established through research. This avoids the risk of selecting a ''catastrophic'' convolutional code, one in which a small number of channel errors can cause an unbounded number of decoded errors.
An especially popular Viterbi-decoded convolutional code, used at least since the [[Voyager program]], has a constraint length ''K'' of 7 and a code rate ''r'' of 1/2.
[[Mars Pathfinder]], [[Mars Exploration Rover]] and the [[Cassini–Huygens|Cassini probe]] to Saturn use a ''K'' of 15 and a rate of 1/6; this code performs about 2 dB better than the simpler ''K'' = 7 code, at a cost of roughly 256× in decoding complexity.
A convolutional code with a constraint length of 5 and a rate of 1/2 is used in [[GSM]] as an error correction technique.<ref>[http://www.scholarpedia.org/article/Global_system_for_mobile_communications_(GSM) Global system for mobile communications (GSM)]</ref>
== Punctured convolutional codes ==
{{See also|Puncturing}}
[[File:Soft34.png|thumb|right|300px|Convolutional codes with 1/2 and 3/4 code rates (and constraint length 7, Soft decision, 4-QAM / QPSK / OQPSK).<ref>[https://ch.mathworks.com/help/comm/ug/punctured-convolutional-coding-1.html Punctured Convolutional Coding (MathWorks)]</ref>]]
A convolutional code with any code rate can be designed based on polynomial selection; however, in practice, a puncturing procedure is often used to achieve the required code rate. Puncturing deletes some of the output bits of a low-rate ("mother") encoder according to a puncturing matrix, in which a "1" means the bit is transmitted and a "0" means it is deleted. Commonly used puncturing matrices for a rate 1/2 mother code are listed below:
{| class="wikitable" |cellpadding="2"
!
!
!
|-
| 1/2<br />(No perf.)
|
{| border="0"
|-
| 1
|-
| 1
|}
| 10
|-
| 2/3
|
{| border="0"
|-
| 1
| 0
|-
| 1
| 1
|}
| 6
|-
| 3/4
|
{| border="0"
|-
| 1
| 0
| 1
|-
| 1
| 1
| 0
|}
| 5
|-
| 5/6
|
{| border="0"
|-
| 1
| 0
| 1
| 0
| 1
|-
| 1
| 1
| 0
| 1
| 0
|}
| 4
|-
| 7/8
|
{| border="0"
|-
| 1
| 0
| 0
| 0
| 1
| 0
| 1
|-
| 1
| 1
| 1
| 1
| 0
| 1
| 0
|}
| 3
|}
For example, if we want to make a code with rate 2/3 using the appropriate matrix from the above table, we take the output of the basic rate 1/2 encoder and transmit every first bit of each pair from the first branch and every bit from the second branch. The specific order of transmission is defined by the respective communication standard.
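As a minimal sketch of this procedure (assuming, purely for illustration, that the encoder output interleaves the two branch bits step by step), puncturing with a matrix from the table can be expressed as follows:

<syntaxhighlight lang="python">
def puncture(encoded, matrix):
    """Delete bits of a mother-code output according to a puncturing matrix.

    encoded -- encoder output, with one bit per branch at each step
    matrix  -- one row per branch; column j says whether the branch bit
               produced at step j (mod period) is transmitted (1) or deleted (0)
    """
    rows = len(matrix)            # number of encoder branches (2 for a 1/2 code)
    period = len(matrix[0])       # puncturing period
    out = []
    for step, i in enumerate(range(0, len(encoded), rows)):
        for branch in range(rows):
            if matrix[branch][step % period]:
                out.append(encoded[i + branch])
    return out

# Rate 2/3: keep every first bit of each pair from branch 1, all bits from branch 2,
# so 4 information bits yield 6 transmitted bits instead of 8.
punctured = puncture([1, 1, 0, 1, 1, 0, 0, 0], [[1, 0], [1, 1]])
</syntaxhighlight>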
Punctured convolutional codes are widely used in [[Communications satellite|satellite communications]], for example, in [[Intelsat]] systems and [[Digital Video Broadcasting]].
Punctured convolutional codes are also called "perforated".
== Turbo codes: replacing convolutional codes ==
[[File:Turbo codes scheme.png|thumb|350px|A turbo code with component codes 13, 15.<ref>[http://www.scholarpedia.org/article/Turbo_codes Turbo code]</ref> Turbo codes get their name because the decoder uses feedback, like a turbo engine. Permutation here means the same as interleaving. C1 and C2 are recursive convolutional codes. Recursive and non-recursive convolutional codes do not differ much in BER performance, but the recursive type is used in turbo codes because of its better interleaving properties.<ref>Benedetto, Sergio, and Guido Montorsi. "[https://web.archive.org/web/20190406181758/https://ieeexplore.ieee.org/abstract/document/390945/ Role of recursive convolutional codes in turbo codes]." Electronics Letters 31.11 (1995): 858-859.</ref>]]
Simple Viterbi-decoded convolutional codes are now giving way to [[turbo code]]s, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed by [[Noisy-channel coding theorem|Shannon's theorem]] with much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance. [[Concatenated error correction code|Concatenation]] with an outer algebraic code (e.g., [[Reed–Solomon error correction|Reed–Solomon]]) addresses the issue of [[error floor]]s inherent to turbo code designs.
== See also ==
* [[Quantum convolutional code]]
== References ==
* {{FS1037C}}
{{Reflist}}
== External links ==
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]], discusses convolutional codes in Chapter 48.
* [http://www.eccpage.com/ The Error Correcting Codes (ECC) Page]
*[https://web.archive.org/web/20140522015414/http://www.mathworks.fr/fr/help/comm/convolutional-coding.html Matlab explanations]
* [http://www.ni.com/white-paper/14917/en/ Fundamentals of Convolutional Decoders for Better Digital Communications]
* [http://web.mit.edu/6.02/www/s2009/handouts/labs/lab5.shtml Convolutional codes (MIT)]
* [http://www5.tu-ilmenau.de/nt/de/teachings/vorlesungen/itsc_master/folien/script.pdf Information Theory and Coding (TU Ilmenau)] {{Webarchive|url=https://web.archive.org/web/20170830033846/http://www5.tu-ilmenau.de/nt/de/teachings/vorlesungen/itsc_master/folien/script.pdf |date=2017-08-30 }}, discusses convolutional codes on page 48.
== Further reading ==
=== Publications ===
{{refbegin}}
* Francis, Michael. "Viterbi Decoder Block Decoding: Trellis Termination and Tail Biting." Xilinx XAPP551 v2.0, DD (2005): 1-21.
* Chen, Qingchun, Wai Ho Mow, and Pingzhi Fan. "Some new results on recursive convolutional codes and their applications." Information Theory Workshop, 2006. ITW'06 Chengdu. IEEE. IEEE, 2006.
* Modestino, J., and Shou Mui. "Convolutional code performance in the Rician fading channel." IEEE Transactions on Communications 24.6 (1976): 592-606.
* Chen, Yuh-Long, and Che-Ho Wei. "Performance evaluation of convolutional codes with MPSK on Rician fading channels." IEE Proceedings F-Communications, Radar and Signal Processing. Vol. 134. No. 2. IET, 1987.
{{refend}}
[[Category:Error detection and correction]]