In [[telecommunication]], a '''convolutional code''' is a type of [[Forward error correction|error-correcting code]] that generates parity symbols via the sliding application of a [[Algebraic normal form|Boolean polynomial]] function to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of convolutional codes facilitates [[Trellis (graph)|trellis]] decoding using a time-invariant trellis. Time-invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity.
The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the base code rate and the depth (or memory) of the encoder <math>[n,k,K]</math>. The base code rate is typically given as <math>n/k</math>, where {{mvar|n}} is the raw input data rate and {{mvar|k}} is the data rate of the output channel-encoded stream. {{mvar|n}} is less than {{mvar|k}} because channel coding inserts redundancy into the input bits. The memory is often called the "constraint length" {{mvar|K}}, where the output is a function of the current input as well as the previous <math>K-1</math> inputs. The depth may also be given as the number of memory elements {{mvar|v}} in the polynomial or the maximum possible number of states of the encoder (typically <math>2^v</math>).
Convolutional codes are often described as continuous. However, it may also be said that convolutional codes have arbitrary block length, rather than being continuous, since most real-world convolutional encoding is performed on blocks of data. Convolutionally encoded block codes typically employ termination. The arbitrary block length of convolutional codes can also be contrasted to classic [[block code]]s, which generally have fixed block lengths that are determined by algebraic properties.
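The sliding parity computation and the <math>[n,k,K]</math> parameters above can be illustrated with a short sketch. This is an assumed toy example, not taken from any particular standard: a rate-1/2, constraint-length <math>K=3</math> feedforward encoder with the commonly cited generator polynomials 7 and 5 (octal).

```python
# Illustrative sketch (assumed example): rate-1/2, K=3 feedforward
# convolutional encoder with generator polynomials 7 and 5 (octal).
# Each input bit is combined with the previous K-1 = 2 bits held in a
# shift register, and each generator produces one parity bit.

def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Return two coded bits per input bit (code rate 1/2)."""
    state = 0                                    # previous K-1 input bits
    out = []
    for b in bits:
        window = (b << (K - 1)) | state          # current bit + memory
        p1 = bin(window & g1).count("1") % 2     # parity from generator 1
        p2 = bin(window & g2).count("1") % 2     # parity from generator 2
        out += [p1, p2]
        state = window >> 1                      # slide the window by one bit
    return out

conv_encode([1, 0, 1, 1])  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each output pair depends on the current input and the two previous inputs, which is the 'convolution' of the generator polynomials over the data stream.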
==Where convolutional codes are used==
[[File:GSM convol code.png|thumb|right|400px|Stages of channel coding in GSM.<ref>Eberspächer, J., et al. ''GSM – Architecture, Protocols and Services''.</ref>]]
Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as [[digital video]], radio, and [[mobile communications]] (e.g., in GSM, GPRS, EDGE and 3G networks until 3GPP Release 7).<ref>3rd Generation Partnership Project (September 2012). "3GPP TS 45.001: Technical Specification Group GSM/EDGE Radio Access Network; Physical layer on the radio path; General description". Retrieved 2013-07-20.</ref><ref>Halonen, Timo, Javier Romero, and Juan Melero, eds. ''GSM, GPRS and EDGE Performance: Evolution Towards 3G/UMTS''. John Wiley & Sons, 2004.</ref>
==Convolutional encoding==
* non-systematic: changes the initial structure
Non-systematic convolutional codes are more popular because of their better noise immunity, which is related to the free distance of the convolutional code.<ref>Moon, Todd K. ''Error Correction Coding: Mathematical Methods and Algorithms''. John Wiley & Sons, 2005.</ref>
<gallery heights="150">
[[Image:Convolutional encoder recursive.svg|thumb|340px|none|Img.2. Rate 1/2 8-state recursive systematic convolutional encoder. Used as constituent code in 3GPP 25.212 Turbo Code.]]
The example encoder is ''[[
Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. This is not a strict requirement, but common practice.
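The recursive systematic structure can be sketched in a few lines. This sketch assumes the 8-state constituent code of the 3GPP 25.212 turbo code shown in Img. 2, whose feedback and feedforward polynomials are commonly given as 1 + D² + D³ and 1 + D + D³ respectively; variable names are illustrative.

```python
# Illustrative sketch of a recursive systematic convolutional (RSC) encoder.
# Feedback polynomial 1 + D^2 + D^3, feedforward polynomial 1 + D + D^3
# (the commonly cited 8-state 3GPP 25.212 turbo constituent code).

def rsc_encode(bits):
    s1 = s2 = s3 = 0           # shift register of past feedback bits
    out = []
    for u in bits:
        a = u ^ s2 ^ s3        # recursive feedback: taps at D^2 and D^3
        p = a ^ s1 ^ s3        # parity from feedforward taps 1, D, D^3
        out += [u, p]          # systematic bit first, then parity bit
        s1, s2, s3 = a, s1, s2
    return out

rsc_encode([1, 0, 0, 0])  # → [1, 1, 0, 1, 0, 1, 0, 1]
```

Note that the systematic output bit equals the input bit, while the parity bit depends on the fed-back state, so a single 1 at the input produces an infinite (here truncated) parity response.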
: <math> m = \max_i \operatorname{polydeg} (H_i(1/z)) \,</math>
where, for any [[rational function]] <math>f(z) = P(z)/Q(z)</math>,
: <math> \operatorname{polydeg}(f) = \max (\deg(P), \deg(Q)) \,</math>.
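The two definitions above translate directly into code. This is a hypothetical helper (names are illustrative), with each transfer function represented by the coefficient lists of its numerator <math>P</math> and denominator <math>Q</math>, low-order coefficients first:

```python
# Illustrative sketch of the memory computation above: polydeg(f) is the
# larger of the numerator and denominator degrees of f = P/Q, and the
# encoder memory m is the maximum polydeg over all transfer functions H_i.

def polydeg(P, Q):
    """max(deg P, deg Q), coefficients given low-to-high order."""
    deg = lambda c: max((i for i, a in enumerate(c) if a != 0), default=0)
    return max(deg(P), deg(Q))

def memory(transfer_functions):
    """m = max_i polydeg(H_i(1/z)) over (P, Q) coefficient-list pairs."""
    return max(polydeg(P, Q) for P, Q in transfer_functions)

# e.g. H1 = 1 (systematic branch), H2 = (1 + x + x^3) / (1 + x^2 + x^3)
m = memory([([1], [1]), ([1, 1, 0, 1], [1, 0, 1, 1])])  # → 3
```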
==Decoding convolutional codes==
[[File:Convolutional codes PSK QAM LLR.svg|thumb|right|300px| Bit error ratio curves for convolutional codes with different options of digital modulations ([[Phase-shift keying|QPSK, 8-PSK]], [[Quadrature amplitude modulation|16-QAM, 64-QAM]]) and [[Likelihood function#Log-likelihood|LLR]] Algorithms.<ref>[https://www.mathworks.com/help/comm/examples/llr-vs-hard-decision-demodulation.html LLR vs. Hard Decision Demodulation (MathWorks)]</ref><ref>[https://www.mathworks.com/help/comm/ug/estimate-ber-for-hard-and-soft-decision-viterbi-decoding.html Estimate BER for Hard and Soft Decision Viterbi Decoding (MathWorks)]</ref> (Exact<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6yjx Digital modulation: Exact LLR Algorithm (MathWorks)]</ref> and Approximate<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6ymu Digital modulation: Approximate LLR Algorithm (MathWorks)]</ref>) over additive white Gaussian noise channel.]]
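A minimal hard-decision Viterbi decoder can be sketched as follows. This is an illustrative toy for an assumed rate-1/2, K=3 code with generator polynomials 7 and 5 (octal); practical decoders operate on soft inputs such as the LLRs shown in the figure.

```python
# Illustrative sketch: hard-decision Viterbi decoding of a rate-1/2, K=3
# convolutional code with generators 7 and 5 (octal). The decoder walks a
# time-invariant trellis of 2^(K-1) states, keeping one survivor path and
# one accumulated Hamming metric per state.

def viterbi_decode(received, g1=0b111, g2=0b101, K=3):
    n_states = 1 << (K - 1)

    def step(state, bit):
        """Branch outputs and next state for one trellis transition."""
        window = (bit << (K - 1)) | state
        p1 = bin(window & g1).count("1") % 2
        p2 = bin(window & g2).count("1") % 2
        return (p1, p2), window >> 1

    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)     # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = (received[i], received[i + 1])
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue                       # state not yet reachable
            for bit in (0, 1):
                (p1, p2), ns = step(s, bit)
                cost = metrics[s] + (p1 != r[0]) + (p2 != r[1])  # Hamming
                if cost < new_metrics[ns]:     # keep the better survivor
                    new_metrics[ns] = cost
                    new_paths[ns] = paths[s] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[min(range(n_states), key=lambda s: metrics[s])]

viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1])  # → [1, 0, 1, 1]
```

Because the trellis is time-invariant, the same branch table is reused at every step, which is what keeps maximum-likelihood decoding tractable.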
==Punctured convolutional codes==
{{See also|Punctured code}}
[[File:Soft34.png|thumb|right|300px|Convolutional codes with 1/2 and 3/4 code rates (and constraint length 7, Soft decision, 4-QAM / QPSK / OQPSK).<ref>[https://ch.mathworks.com/help/comm/ug/punctured-convolutional-coding-1.html Punctured Convolutional Coding (MathWorks)]</ref>]]
A convolutional code with any code rate can be designed based on polynomial selection;<ref>{{Cite web|url=https://www.mathworks.com/help/comm/ref/poly2trellis.html|title=Convert convolutional code polynomials to trellis description|website=MathWorks}}</ref> however, in practice, a puncturing procedure is often used to achieve the required code rate.
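The puncturing operation itself is just a periodic deletion of coded bits. The sketch below raises a rate-1/2 mother code to rate 3/4 using an assumed puncturing pattern (actual patterns vary by standard):

```python
# Illustrative sketch: puncturing a rate-1/2 output to rate 3/4.
# Assumed pattern, written per coded-bit pair (p1, p2) over 3 input bits:
#   stream 1: 1 1 0
#   stream 2: 1 0 1
# i.e. keep 4 of every 6 coded bits -> 3 input bits per 4 sent bits.
PATTERN = [1, 1, 1, 0, 0, 1]

def puncture(coded):
    """Keep only the coded bits where the cyclic pattern has a 1."""
    return [b for i, b in enumerate(coded) if PATTERN[i % len(PATTERN)]]
```

The receiver re-inserts erasures at the punctured positions before Viterbi decoding, so the same mother-code trellis serves several code rates.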
{| class="wikitable" cellpadding="2"
|