Error correction code
{{Use dmy dates|date=August 2022}}
In [[computing]], [[telecommunication]], [[information theory]], and [[coding theory]], '''forward error correction''' ('''FEC''') or '''channel coding'''<ref>{{cite journal |author1=Charles Wang |author2=Dean Sklar |author3=Diana Johnson |title=Forward Error-Correction Coding |journal=Crosslink |publisher=The Aerospace Corporation |volume=3 |issue=1 |date=Winter 2001–2002 |url=http://www.aero.org/publications/crosslink/winter2002/04.html |access-date=5 March 2006 |archive-url=https://web.archive.org/web/20120314085127/http://www.aero.org/publications/crosslink/winter2002/04.html |archive-date=14 March 2012 |url-status=dead }}</ref><ref>{{cite journal |author1=Charles Wang |author2=Dean Sklar |author3=Diana Johnson |title=Forward Error-Correction Coding |journal=Crosslink |publisher=The Aerospace Corporation |volume=3 |issue=1 |date=Winter 2001–2002 |url=http://www.aero.org/publications/crosslink/winter2002/04_sidebar1.html |quote=How Forward Error-Correcting Codes Work |access-date=5 March 2006 |archive-url=https://web.archive.org/web/20120314085127/http://www.aero.org/publications/crosslink/winter2002/04_sidebar1.html |archive-date=14 March 2012 |url-status=dead }}</ref><ref name=":0">{{Cite web |url=https://www.accelercomm.com/overview-channel-coding |title=Overview of Channel Coding |last=Maunder |first=Robert |date=2016}}</ref> is a technique used for [[error control|controlling errors]] in [[data transmission]] over unreliable or noisy [[communication channel]]s.

The central idea is that the sender encodes the message in a [[Redundancy (information theory)|redundant]] way, most often by using an '''error correction code''', or '''error correcting code''' ('''ECC''').<ref>{{cite book |author-last1=Glover |author-first1=Neal |author-last2=Dudley |author-first2=Trent |title=Practical Error Correction Design For Engineers |edition=Revision 1.1, 2nd |publisher=[[Cirrus Logic]] |___location=CO, USA |date=1990 |isbn=0-927239-00-0 }}</ref><ref name="Hamming">{{cite journal |author-last=Hamming |author-first=Richard Wesley |author-link=Richard Wesley Hamming |title=Error Detecting and Error Correcting Codes |journal=[[Bell System Technical Journal]] |volume=29 |issue=2 |pages=147–160 |publisher=[[AT&T]] |___location=USA |date=April 1950 |doi=10.1002/j.1538-7305.1950.tb00463.x |s2cid=61141773 |hdl=10945/46756 |hdl-access=free }}</ref> The redundancy allows the receiver not only to [[error detection|detect errors]] that may occur anywhere in the message, but often to correct a limited number of errors. Therefore a [[reverse channel]] to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth.
 
The American mathematician [[Richard Hamming]] pioneered this field in the 1940s and invented the first error-correcting code in 1950: the [[Hamming (7,4) code]].<ref name="Hamming"/>
 
FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in [[multicast]].
 
Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in [[modem]]s and in [[cellular network]]s.
 
FEC information is added to [[mass storage]] (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used as [[ECC memory|ECC]] [[computer memory]] on systems that require special provisions for reliability.
FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial [[Analog-to-digital converter|analog-to-digital conversion]] in the receiver. The [[Viterbi decoder]] implements a [[Soft-decision decoder|soft-decision algorithm]] to demodulate digital data from an analog signal corrupted by noise. Many FEC coders can also generate a [[bit-error rate]] (BER) signal which can be used as feedback to fine-tune the analog receiving electronics.
The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective [[signal-to-noise ratio]]. The [[noisy-channel coding theorem]] of [[Claude Shannon]] can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like [[Polar code (coding theory)|polar code]]<ref name=":0" /> come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite length frame.
 
==Method==
ECC is accomplished by adding [[redundancy (information theory)|redundancy]] to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are '''[[systematic code|systematic]]''', while those that do not are '''non-systematic'''.
 
A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) [[repetition code]]. Through a noisy channel, a receiver might see eight versions of the output, see the table below.
 
{| class=wikitable
|-
! Triplet received !! Interpreted as
|-
| 000 || 0 (error-free)
|-
| 001 || 0
|-
| 010 || 0
|-
| 100 || 0
|-
| 111 || 1 (error-free)
|-
| 110 || 1
|-
| 101 || 1
|-
| 011 || 1
|}
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is:
* Up to one bit of triplet in error, or
* up to two bits of triplet omitted (cases not shown in table).
Though simple to implement and widely used, this [[triple modular redundancy]] is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits).
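The (3,1) repetition scheme with majority-vote decoding described above can be sketched in a few lines of Python; the function names and the sample message are illustrative only, not part of any standard library.

```python
def encode(bits):
    """Repeat each data bit three times: the (3,1) repetition code."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Decode each triplet by majority vote."""
    decoded = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        decoded.append(1 if sum(triplet) >= 2 else 0)
    return decoded

message = [1, 0, 1]
codeword = encode(message)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
codeword[1] ^= 1                    # flip one bit of the first triplet
assert decode(codeword) == message  # a single error per triplet is corrected
```

With one flipped bit per triplet the vote still recovers the message; two flips in the same triplet would defeat it, matching the correcting ability listed above.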
 
==Averaging noise to reduce errors==
[[Adaptive modulation and coding]] uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.
 
==Types of ECC==
{{Main|Block code|Convolutional code}}
 
[[File:Block code error correction.png|thumb|upright=1.3|A block code (specifically a [[Hamming code]]) where redundant bits are added as a block to the end of the initial message]]
[[File:Convolutional code error correction.png|thumb|A continuous [[convolutional code]] where redundant bits are added continuously into the structure of the code word]]
 
The two main categories of ECC codes are [[block code]]s and [[convolutional code]]s.
There are many types of block codes; [[Reed–Solomon error correction|Reed–Solomon coding]] is noteworthy for its widespread use in [[compact disc]]s, [[DVD]]s, and [[hard disk drive#Error rates and handling|hard disk drives]]. Other examples of classical block codes include [[Golay code (disambiguation)|Golay]], [[BCH code|BCH]], [[Multidimensional parity-check code|Multidimensional parity]], and [[Hamming code]]s.
 
Hamming ECC is commonly used to correct [[ECC memory]] and early SLC [[NAND flash]] memory errors.<ref>[http://www.eetasia.com/ART_8800575062_499486_AN_7549c493.HTM "Hamming codes for NAND flash memory devices"] {{Webarchive|url=https://web.archive.org/web/20160821122453/http://www.eetasia.com/ART_8800575062_499486_AN_7549c493.HTM |date=21 August 2016 }}. EE Times-Asia. Apparently based on [http://www.micron.com/~/media/Documents/Products/Technical%20Note/NAND%20Flash/tn2908_NAND_hamming_ECC_code.pdf "Micron Technical Note TN-29-08: Hamming Codes for NAND Flash Memory Devices"] {{Webarchive|url=https://web.archive.org/web/20170829073235/http://www.micron.com/~/media/Documents/Products/Technical%20Note/NAND%20Flash/tn2908_NAND_hamming_ECC_code.pdf |date=29 August 2017 }}. 2005. Both say: "The Hamming algorithm is an industry-accepted method for error detection and correction in many SLC NAND flash-based applications."</ref>
This provides single-bit error correction and 2-bit error detection.
Hamming codes are only suitable for more reliable [[single-level cell]] (SLC) NAND.
Denser [[multi-level cell]] (MLC) NAND may use multi-bit correcting ECC such as [[BCH code|BCH]], Reed–Solomon, or [[Low-density parity-check code|LDPC]].<ref name="spansion">{{cite web|url=http://www.spansion.com/Support/Application%20Notes/Types_of_ECC_Used_on_Flash_AN.pdf|title =What Types of ECC Should Be Used on Flash Memory?|publisher=Spansion|format=Application note|year=2011|quote=Both Reed–Solomon algorithm and BCH algorithm are common ECC choices for MLC NAND flash. ... Hamming based block codes are the most commonly used ECC for SLC.... both Reed–Solomon and BCH are able to handle multiple errors and are widely used on MLC flash.}}</ref><ref>{{cite web|author=Jim Cooke|url=https://cushychicken.github.io/assets/cooke_inconvenient_truths.pdf |title=The Inconvenient Truths of NAND Flash Memory|date=August 2007|page=28|quote=For SLC, a code with a correction threshold of 1 is sufficient. t=4 required ... for MLC.}}</ref><ref>https://www.thinkmind.org/articles/icons_2015_5_20_40055.pdf {{Bare URL PDF|date=July 2025}}</ref> NOR flash typically does not use any error correction.<ref name="spansion"/>
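The single-bit correction with two-bit detection mentioned above can be illustrated with a Hamming(7,4) sketch. The bit layout (parity bits at 1-based positions 1, 2 and 4) is the textbook convention, and the function names are invented for this example, not taken from any NAND flash datasheet.

```python
def hamming74_encode(d):              # d = [d1, d2, d3, d4]
    """Hamming(7,4): place parity bits at positions 1, 2 and 4."""
    p1 = d[0] ^ d[1] ^ d[3]           # covers positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]           # covers positions 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]           # covers positions 5, 6, 7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Correct up to one flipped bit via syndrome decoding."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # recompute each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based error position, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # extract the four data bits

data = [1, 0, 1, 1]
cw = hamming74_encode(data)
cw[4] ^= 1                            # inject a single-bit error
assert hamming74_correct(cw) == data
```

The syndrome equals the binary position of the flipped bit, which is what makes the code "perfect" for single errors; adding an overall parity bit (extended Hamming, SECDED) additionally detects double errors.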
 
Classical block codes are usually decoded using '''hard-decision''' algorithms,<ref>{{cite journal |author-last1=Baldi |author-first1=M. |author-last2=Chiaraluce |author-first2=F. |title=A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions |journal=[[International Journal of Digital Multimedia Broadcasting]] |volume=2008 |pages=1–12 |date=2008 |doi=10.1155/2008/957846 |doi-access=free }}</ref> which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using '''soft-decision''' algorithms like the Viterbi, MAP or [[BCJR algorithm|BCJR]] algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding.
In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as [[LDPC codes]] lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates.
 
Most [[forward error correction]] codes correct only bit-flips, but not bit-insertions or bit-deletions.
In this setting, the [[Hamming distance]] is the appropriate way to measure the [[bit error rate]].
A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as Marker Codes and Watermark Codes.
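Evaluating a code by its bit error rate, with the Hamming distance counting residual bit-flips, can be sketched with a small Monte Carlo simulation. The (3,1) repetition code, the crossover probability and the trial count below are illustrative assumptions, not taken from any particular system.

```python
import random

def ber_repetition(p, trials=100_000, rng=random.Random(42)):
    """Estimate the residual BER of the (3,1) repetition code
    over a binary symmetric channel with crossover probability p."""
    errors = 0
    for _ in range(trials):
        sent = rng.randint(0, 1)
        # Each of the three copies flips independently with probability p.
        received = [sent ^ (rng.random() < p) for _ in range(3)]
        decoded = 1 if sum(received) >= 2 else 0
        errors += decoded != sent   # Hamming-distance contribution
    return errors / trials

# Theory: residual BER = 3p^2(1-p) + p^3; for p = 0.1 that is 0.028.
assert abs(ber_repetition(0.1) - 0.028) < 0.005
```

The simulated rate converges on the closed-form value, showing how a code trades raw channel errors for a much smaller post-decoding error rate.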
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero:<ref name="shannon paper">{{cite journal|first=C. E.|last=Shannon|title=A mathematical theory of communication|journal=[[Bell System Technical Journal]]|volume=27|issue=3–4|pages=379–423 & 623–656|date=1948|url=http://www.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf|doi=10.1002/j.1538-7305.1948.tb01338.x|hdl=11858/00-001M-0000-002C-4314-2|hdl-access=free}}</ref> His proof relies on Gaussian random coding, which is not suitable to real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity achieving ECCs are usually extremely complex to implement.
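Shannon's result can be made concrete for the binary symmetric channel, whose capacity has the closed form C = 1 − H(p) with H the binary entropy function; any ECC with code rate below C can, in principle, drive the decoding error probability toward zero. The sketch below uses only this textbook formula (the function names are invented for the example).

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

# With a 1% bit-flip probability, roughly 0.92 data bits per transmitted
# bit are achievable by a sufficiently strong (and complex) code.
assert 0.91 < bsc_capacity(0.01) < 0.93
```

At p = 0.5 the capacity is zero (the channel output carries no information), which is why no code, however complex, can help there.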
 
The most popular ECCs have a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Typically, this optimization is done in order to achieve a low decoding error probability while minimizing the impact to the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions in order to minimize the energy cost of the communication.<ref>{{Cite conference |title=Optimizing the code rate for achieving energy-efficient wireless communications |first1=F. |last1=Rosas |first2=G. |last2=Brante |first3=R. D. |last3=Souza |first4=C. |last4=Oberli |url=https://ieeexplore.ieee.org/document/6952166 |date=2014 |pages=775–780 |doi=10.1109/WCNC.2014.6952166 |isbn=978-1-4799-3083-8 |book-title=Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC)}}</ref>
 
==Concatenated ECC codes for improved performance==
 
[[Locally decodable code]]s are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. [[Locally testable code]]s are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
 
Not all locally decodable codes (LDCs) are locally testable codes (LTCs),<ref name="decNotTest">{{cite web |last1=Kaufman |first1=Tali |author1-link=Tali Kaufman |last2=Viderman |first2=Michael |title=Locally Testable vs. Locally Decodable Codes |url=http://eccc.hpi-web.de/report/2010/130/revision/1/download/}}</ref> nor are they necessarily locally correctable codes (LCCs).<ref>{{Cite web |last=Brubaker |first=Ben |date=2024-01-09 |title='Magical' Error Correction Scheme Proved Inherently Inefficient |url=https://www.quantamagazine.org/magical-error-correction-scheme-proved-inherently-inefficient-20240109/ |access-date=2024-01-09 |website=Quanta Magazine |language=en}}</ref> q-query LCCs are bounded exponentially in length,<ref>{{Cite arXiv |last1=Kothari |first1=Pravesh K. |last2=Manohar |first2=Peter |date=2023 |title=An Exponential Lower Bound for Linear 3-Query Locally Correctable Codes |class=cs.CC |eprint=2311.00558}}</ref><ref>{{Cite book |last1=Kerenidis |first1=Iordanis |last2=de Wolf |first2=Ronald |chapter=Exponential lower bound for 2-query locally decodable codes via a quantum argument |date=2003-06-09 |title=Proceedings of the thirty-fifth annual ACM symposium on Theory of computing |chapter-url=https://dl.acm.org/doi/10.1145/780542.780560 |language=en |publisher=ACM |pages=106–115 |doi=10.1145/780542.780560 |arxiv=quant-ph/0208062 |isbn=978-1-58113-674-6 |s2cid=10585919 }}</ref> while LDCs can have [[Subexponential time|subexponential]] lengths.<ref>{{Cite journal |last=Yekhanin |first=Sergey |date=February 2008 |title=Towards 3-query locally decodable codes of subexponential length |url=https://dl.acm.org/doi/10.1145/1326554.1326555 |journal=Journal of the ACM |language=en |volume=55 |issue=1 |pages=1–16 |doi=10.1145/1326554.1326555 |s2cid=14617710 |issn=0004-5411 |url-access=subscription }}</ref><ref>{{Cite book |last=Efremenko |first=Klim |chapter=3-query locally decodable codes of subexponential length |date=2009-05-31 |title=Proceedings of the forty-first annual ACM symposium on Theory of computing |chapter-url=https://dl.acm.org/doi/10.1145/1536414.1536422 |language=en |publisher=ACM |pages=39–44 |doi=10.1145/1536414.1536422 |isbn=978-1-60558-506-2 |s2cid=263865692 }}</ref>
 
== Interleaving ==
{{redirect|Interleaver|the fiber-optic device|optical interleaver}}
 
[[File:Interleaving1.png|right|upright=2.25|thumb|A short illustration of the interleaving idea]]
 
Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many [[communication channel]]s are not memoryless: errors typically occur in [[burst error|burst]]s rather than independently. If the number of errors within a [[Code word (communication)|code word]] exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more [[Uniform distribution (continuous)|uniform distribution]] of errors.<ref name="turbo-principles">{{cite book |author-first1=B. |author-last1=Vucetic |author-first2=J. |author-last2=Yuan |title=Turbo codes: principles and applications |publisher=[[Springer Verlag]] |isbn=978-0-7923-7868-6 |date=2000}}</ref> Therefore, interleaving is widely used for [[burst error-correcting code|burst error-correction]].
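The shuffling idea can be sketched with a rectangular block interleaver, a minimal assumed design (symbols written row by row, read column by column); real systems use more elaborate permutations.

```python
def interleave(symbols, rows, cols):
    """Write a rows x cols block row by row, read it column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse permutation: write column by column, read row by row."""
    return interleave(symbols, cols, rows)

# Four 4-symbol code words, interleaved into one 16-symbol stream.
stream = interleave(list("aaaabbbbccccdddd"), rows=4, cols=4)
# A burst of four consecutive channel errors...
stream[4:8] = ["?"] * 4
# ...lands as a single error in each code word after deinterleaving.
received = "".join(deinterleave(stream, rows=4, cols=4))
assert received == "a?aab?bbc?ccd?dd"
```

Each code word now contains only one corrupted symbol, which a single-error-correcting code can repair, whereas the original burst of four would have overwhelmed any one code word.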
 
The analysis of modern iterated codes, like [[turbo code]]s and [[LDPC code]]s, typically assumes an independent distribution of errors.<ref>{{cite journal |author-first1=Michael |author-last1=Luby |author-link1=Michael Luby |author-first2=M. |author-last2=Mitzenmacher |author-first3=A. |author-last3=Shokrollahi |author-first4=D. |author-last4=Spielman |author-first5=V. |author-last5=Stemann |title=Practical Loss-Resilient Codes |journal=Proc. 29th Annual Association for Computing Machinery (ACM) Symposium on Theory of Computation |date=1997}}</ref> Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.<ref>{{Cite journal |title=Digital Video Broadcast (DVB); Second generation framing structure, channel coding and modulation systems for Broadcasting, Interactive Services, News Gathering and other satellite broadband applications (DVB-S2) |journal=En 302 307 |issue=V1.2.1 |publisher=[[ETSI]] |date=April 2009}}</ref>
| 3 (single-error correcting) || [[Triple modular redundancy]]
|-
| 3 (single-error correcting) || Perfect Hamming such as [[Hamming(7,4)]]
|-
| 4 ([[SECDED]]) || Extended Hamming
Line 220 ⟶ 221:
| 6 (double-error correct-/triple error detect) || [[Nordstrom-Robinson code]]
|-
| 7 (three-error correcting) || Perfect [[binary Golay code]]
|-
| 8 (TECFED) || Extended [[binary Golay code]]
|}
* [[AN codes]]
* [[Algebraic geometry code]]
* [[BCH code]], which can be designed to correct any number of errors per code block.
* [[Barker code]], used for radar, telemetry, ultrasound, Wi-Fi, DSSS mobile phone networks, GPS, etc.
* [[Group code]]s
* [[Golay code (disambiguation)|Golay code]]s, of which the [[Binary Golay code]] is of practical interest
* [[Binary Goppa code|Goppa code]], used in the [[McEliece cryptosystem]]
* [[Hadamard code]]
* [[Hagelbarger code]]
* [[LT code]], which is a near-optimal [[Fountain code|rateless erasure correcting code (Fountain code)]]
* [[m of n codes]]
* [[Nordstrom-Robinson code]], used in geometry and group theory<ref>{{Citation |author-first1=A.W. |author-last1=Nordstrom |author-first2=J.P. |author-last2=Robinson |title=An optimum nonlinear code |journal=Information and Control |date=1967 |volume=11 |issue=5–6 |pages=613–616 |doi=10.1016/S0019-9958(67)90835-2 |doi-access=free }}</ref>
* [[Online code]], a near-optimal [[Fountain code|rateless erasure correcting code]]
* [[Polar code (coding theory)]]
* [[Turbo code]]
* [[Walsh–Hadamard code]]
* [[Cyclic redundancy check]]s (CRCs) can correct 1-bit errors for messages at most <math>2^{n-1}-1</math> bits long for optimal generator polynomials of degree <math>n</math>, see {{section link|Mathematics of cyclic redundancy checks|Bitfilters}}
* [[Locally Recoverable Codes]]
* [[Message authentication code]]
 
==See also==
* [[Burst error-correcting code]]
* [[Code rate]]
* [[Erasure code]]s
* [[Error detection and correction]]
* [[Error-correcting codes with feedback]]
* [[Linear code]]
* [[Quantum error correction]]
* [[Soft-decision decoder]]
 
==References==
* [https://web.archive.org/web/20160304050018/http://www.eetasia.com/ARTICLES/2004NOV/A/2004NOV29_MEM_AN10.PDF?SOURCES=DOWNLOAD "Error Correction Code in NAND Flash memories"] 2004-11-29
* [http://perspectives.mvdirona.com/2012/02/observations-on-errors-corrections-trust-of-dependent-systems/ Observations on Errors, Corrections, & Trust of Dependent Systems], by James Hamilton, 2012-02-26
* ''[[Sphere Packings, Lattices and Groups]]'', by J. H. Conway, Neil James Alexander Sloane, [[Springer Science & Business Media]], 2013-03-09 – Mathematics – 682 pages.
 
==External links==
* {{Cite web |author-last=Morelos-Zaragoza |author-first=Robert |date=2004 |url=http://www.eccpage.com/ |title=The Correcting Codes (ECC) Page |access-date=2006-03-05}}
* [https://errorcorrectionzoo.org/ Error Correction Zoo]. Database of error correcting codes.
* [https://github.com/supermihi/lpdec lpdec: library for LP decoding and related things (Python)]