Error correction code: Difference between revisions

The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error-correcting codes are suitable for different conditions. In general, a stronger code adds more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective [[signal-to-noise ratio]]. The [[noisy-channel coding theorem]] of [[Claude Shannon]] can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like [[Polar code (coding theory)|polar code]]<ref name=":0" /> come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame.
 
==Method==
ECC is accomplished by adding [[redundancy (information theory)|redundancy]] to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are '''[[systematic code|systematic]]''', while those that do not are '''non-systematic'''.
 
A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) [[repetition code]]. Through a noisy channel, a receiver might see eight versions of the output; see the table below.
 
{| class=wikitable
! Triplet received
! Interpreted as
|-
|000
|0 (error-free)
|-
|001
|0
|-
|010
|0
|-
|100
|0
|-
|111
|1 (error-free)
|-
|110
|1
|-
|101
|1
|-
|011
|1
|}
This allows an error in any one of the three samples to be corrected by "majority vote". The correcting ability of this ECC is:
* up to one bit of a triplet in error, or
* up to two bits of a triplet omitted (cases not shown in the table).
Though simple to implement and widely used, this [[triple modular redundancy]] is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits).
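The (3,1) repetition code described above can be sketched in a few lines; this is an illustrative toy, not an implementation used in practice:

```python
# (3,1) repetition code: each data bit is sent three times and the
# receiver decodes each triplet by majority vote.

def encode(bits):
    """Repeat every data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority-vote each received triplet back to one data bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

codeword = encode([1, 0, 1])           # [1, 1, 1, 0, 0, 0, 1, 1, 1]
corrupted = codeword[:]
corrupted[1] = 0                       # flip one bit of the first triplet
assert decode(corrupted) == [1, 0, 1]  # the single error is corrected
```

As the table shows, any single flipped bit within a triplet is outvoted by the other two samples, while two flips within one triplet defeat the code.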
 
==Averaging noise to reduce errors==
There are many types of block codes; [[Reed–Solomon error correction|Reed–Solomon coding]] is noteworthy for its widespread use in [[compact disc]]s, [[DVD]]s, and [[hard disk drive#Error rates and handling|hard disk drives]]. Other examples of classical block codes include [[Golay code (disambiguation)|Golay]], [[BCH code|BCH]], [[Multidimensional parity-check code|Multidimensional parity]], and [[Hamming code]]s.
 
Hamming ECC is commonly used to correct [[ECC memory]] and early SLC [[NAND flash]] memory errors.<ref>[http://www.eetasia.com/ART_8800575062_499486_AN_7549c493.HTM "Hamming codes for NAND flash memory devices"] {{Webarchive|url=https://web.archive.org/web/20160821122453/http://www.eetasia.com/ART_8800575062_499486_AN_7549c493.HTM |date=21 August 2016 }}. EE Times-Asia. Apparently based on [http://www.micron.com/~/media/Documents/Products/Technical%20Note/NAND%20Flash/tn2908_NAND_hamming_ECC_code.pdf "Micron Technical Note TN-29-08: Hamming Codes for NAND Flash Memory Devices"] {{Webarchive|url=https://web.archive.org/web/20170829073235/http://www.micron.com/~/media/Documents/Products/Technical%20Note/NAND%20Flash/tn2908_NAND_hamming_ECC_code.pdf |date=29 August 2017 }}. 2005. Both say: "The Hamming algorithm is an industry-accepted method for error detection and correction in many SLC NAND flash-based applications."</ref>
This provides single-bit error correction and 2-bit error detection.
Hamming codes are only suitable for more reliable [[single-level cell]] (SLC) NAND.
Denser [[multi-level cell]] (MLC) NAND may use multi-bit correcting ECC such as [[BCH code|BCH]], Reed–Solomon, or [[Low-density parity-check code|LDPC]].<ref name="spansion">{{cite web|url=http://www.spansion.com/Support/Application%20Notes/Types_of_ECC_Used_on_Flash_AN.pdf|title =What Types of ECC Should Be Used on Flash Memory?|publisher=Spansion|format=Application note|year=2011|quote=Both Reed–Solomon algorithm and BCH algorithm are common ECC choices for MLC NAND flash. ... Hamming based block codes are the most commonly used ECC for SLC.... both Reed–Solomon and BCH are able to handle multiple errors and are widely used on MLC flash.}}</ref><ref>{{cite web|author=Jim Cooke|url=https://cushychicken.github.io/assets/cooke_inconvenient_truths.pdf |title=The Inconvenient Truths of NAND Flash Memory|date=August 2007|page=28|quote=For SLC, a code with a correction threshold of 1 is sufficient. t=4 required ... for MLC.}}</ref><ref>https://www.thinkmind.org/articles/icons_2015_5_20_40055.pdf {{Bare URL PDF|date=July 2025}}</ref> NOR flash typically does not use any error correction.<ref name="spansion"/>
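The single-bit correction and syndrome decoding behind Hamming ECC can be illustrated with the classic Hamming(7,4) code, in which parity bits at positions 1, 2, and 4 make the syndrome equal the position of any single flipped bit (a teaching sketch; hardware NAND controllers use larger, optimized variants):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits.
# Bit positions are 1..7; parity bits sit at positions 1, 2, 4.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4       # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4       # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4       # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Return the codeword with any single bit error corrected."""
    c = c[:]
    s = 0
    # Parity check k covers every position whose index has bit k set.
    for k in (0, 1, 2):
        parity = 0
        for pos in range(1, 8):
            if pos & (1 << k):
                parity ^= c[pos - 1]
        s |= parity << k
    if s:                   # syndrome = position of the flipped bit
        c[s - 1] ^= 1
    return c

cw = hamming74_encode([1, 0, 1, 1])
bad = cw[:]
bad[5] ^= 1                 # flip the bit at position 6
assert hamming74_correct(bad) == cw
```

Any two-bit error changes the syndrome as well, which is what gives the scheme its 2-bit error detection when an overall parity bit is added (Hamming(8,4), often called SECDED).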
 
Classical block codes are usually decoded using '''hard-decision''' algorithms,<ref>{{cite journal |author-last1=Baldi |author-first1=M. |author-last2=Chiaraluce |author-first2=F. |title=A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions |journal=[[International Journal of Digital Multimedia Broadcasting]] |volume=2008 |pages=1–12 |date=2008 |doi=10.1155/2008/957846 |doi-access=free }}</ref> which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using '''soft-decision''' algorithms like the Viterbi, MAP or [[BCJR algorithm|BCJR]] algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding.
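The hard- versus soft-decision distinction can be made concrete with the repetition code over a toy analog channel (bit 0 sent as −1.0, bit 1 as +1.0); the example below is illustrative only:

```python
# Hard vs. soft decision on a (3,1) repetition code over an
# analog channel: a '1' was sent, but noise pulled two samples low.

received = [0.9, -0.2, -0.1]

# Hard decision: threshold each sample first, then majority-vote.
hard_bits = [1 if x > 0 else 0 for x in received]
hard_decoded = 1 if sum(hard_bits) >= 2 else 0   # wrong: decodes to 0

# Soft decision: combine the analog evidence before deciding.
soft_decoded = 1 if sum(received) > 0 else 0     # correct: decodes to 1

assert hard_decoded == 0
assert soft_decoded == 1
```

The hard decoder throws away the fact that the first sample was received with high confidence; the soft decoder keeps that information, which is why soft-decision decoding gives higher error-correction performance.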
In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as [[LDPC codes]] lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates.
 
Most [[forward error correction]] codes correct only bit-flips, but not bit-insertions or bit-deletions.
In this setting, the [[Hamming distance]] is the appropriate way to measure the [[bit error rate]].
A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as Marker Codes and Watermark Codes.
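The Hamming distance mentioned above simply counts the positions at which two equal-length bit strings differ:

```python
# Hamming distance between two equal-length bit sequences.

def hamming_distance(a, b):
    assert len(a) == len(b), "sequences must have equal length"
    return sum(x != y for x, y in zip(a, b))

assert hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]) == 2
```

Note that this metric is only meaningful when no insertions or deletions occur, since those change the alignment (and possibly the length) of the sequences; for insertion/deletion channels the [[Levenshtein distance|edit distance]] is the natural measure instead.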
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero:<ref name="shannon paper">{{cite journal|first=C. E.|last=Shannon|title=A mathematical theory of communication|journal=[[Bell System Technical Journal]]|volume=27|issue=3–4|pages=379–423 & 623–656|date=1948|url=http://www.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf|doi=10.1002/j.1538-7305.1948.tb01338.x|hdl=11858/00-001M-0000-002C-4314-2|hdl-access=free}}</ref> His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.
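For the binary symmetric channel with crossover probability p, Shannon's theorem gives the capacity in closed form as C = 1 − H(p), where H is the binary entropy function; this is easy to evaluate numerically:

```python
# Capacity of the binary symmetric channel: C = 1 - H(p),
# where H(p) is the binary entropy function.

from math import log2

def binary_entropy(p):
    """Binary entropy H(p) in bits, with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Maximum achievable code rate on a BSC with crossover probability p."""
    return 1.0 - binary_entropy(p)

# At about 11% bit-flip probability, only ~half the raw bit rate is usable.
assert abs(bsc_capacity(0.11) - 0.5) < 0.01
```

No ECC can sustain a code rate above this capacity with an error rate tending to zero, which is exactly the bound the codes discussed below are measured against.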
 
The most popular ECCs trade off performance against computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Typically, this optimization aims for a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to minimize the energy cost of the communication.<ref>{{Cite conference |title=Optimizing the code rate for achieving energy-efficient wireless communications |first1=F. |last1=Rosas |first2=G. |last2=Brante |first3=R. D. |last3=Souza |first4=C. |last4=Oberli |url=https://ieeexplore.ieee.org/document/6952166 |date=2014 |pages=775–780 |doi=10.1109/WCNC.2014.6952166 |isbn=978-1-4799-3083-8 |book-title=Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC)}}</ref>
 
==Concatenated ECC codes for improved performance==
[[Locally decodable code]]s are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. [[Locally testable code]]s are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
 
Not all locally decodable codes (LDCs) are locally testable codes (LTCs),<ref name="decNotTest">{{cite web |last1=Kaufman |first1=Tali |author1-link=Tali Kaufman |last2=Viderman |first2=Michael |title=Locally Testable vs. Locally Decodable Codes |url=http://eccc.hpi-web.de/report/2010/130/revision/1/download/}}</ref> nor are all LDCs locally correctable codes (LCCs).<ref>{{Cite web |last=Brubaker |first=Ben |date=2024-01-09 |title='Magical' Error Correction Scheme Proved Inherently Inefficient |url=https://www.quantamagazine.org/magical-error-correction-scheme-proved-inherently-inefficient-20240109/ |access-date=2024-01-09 |website=Quanta Magazine |language=en}}</ref> The length of q-query LCCs is lower-bounded exponentially,<ref>{{Cite arXiv |last1=Kothari |first1=Pravesh K. |last2=Manohar |first2=Peter |date=2023 |title=An Exponential Lower Bound for Linear 3-Query Locally Correctable Codes |class=cs.CC |eprint=2311.00558}}</ref><ref>{{Cite book |last1=Kerenidis |first1=Iordanis |last2=de Wolf |first2=Ronald |chapter=Exponential lower bound for 2-query locally decodable codes via a quantum argument |date=2003-06-09 |title=Proceedings of the thirty-fifth annual ACM symposium on Theory of computing |chapter-url=https://dl.acm.org/doi/10.1145/780542.780560 |language=en |publisher=ACM |pages=106–115 |doi=10.1145/780542.780560 |arxiv=quant-ph/0208062 |isbn=978-1-58113-674-6 |s2cid=10585919}}</ref> while LDCs can have [[Subexponential time|subexponential]] lengths.<ref>{{Cite journal |last=Yekhanin |first=Sergey |date=February 2008 |title=Towards 3-query locally decodable codes of subexponential length |url=https://dl.acm.org/doi/10.1145/1326554.1326555 |journal=Journal of the ACM |language=en |volume=55 |issue=1 |pages=1–16 |doi=10.1145/1326554.1326555 |s2cid=14617710 |issn=0004-5411 |url-access=subscription}}</ref><ref>{{Cite book |last=Efremenko |first=Klim |chapter=3-query locally decodable codes of subexponential length |date=2009-05-31 |title=Proceedings of the forty-first annual ACM symposium on Theory of computing |chapter-url=https://dl.acm.org/doi/10.1145/1536414.1536422 |language=en |publisher=ACM |pages=39–44 |doi=10.1145/1536414.1536422 |isbn=978-1-60558-506-2 |s2cid=263865692}}</ref>
 
== Interleaving ==
* [[Cyclic redundancy check]]s (CRCs) can correct 1-bit errors for messages at most <math>2^{n-1}-1</math> bits long for optimal generator polynomials of degree <math>n</math>; see {{section link|Mathematics of cyclic redundancy checks|Bitfilters}}
* [[Locally Recoverable Codes]]
* [[Message authentication code]]
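The single-bit correcting ability of a CRC noted above can be demonstrated with a brute-force search: when the original checksum is known, the flipped bit is the only flip that restores it. (Practical implementations precompute a syndrome-to-position table instead; the brute force here is purely illustrative, using Python's standard CRC-32.)

```python
# Single-bit error correction with a CRC, by trial flipping.
# Assumes exactly one bit was flipped in transit.

import zlib

message = bytearray(b"hello, world")
checksum = zlib.crc32(message)        # computed before transmission

message[3] ^= 0x10                    # one bit corrupted in transit

def correct_single_bit(data, expected_crc):
    """Find and fix, in place, the single flipped bit that restores the CRC."""
    for i in range(len(data)):
        for mask in (1 << k for k in range(8)):
            data[i] ^= mask
            if zlib.crc32(data) == expected_crc:
                return True           # error located and corrected
            data[i] ^= mask           # undo the trial flip
    return False

assert correct_single_bit(message, checksum)
assert message == bytearray(b"hello, world")
```

The search is unambiguous for short messages because CRC-32 detects all 2-bit errors at these lengths, so no wrong trial flip can reproduce the expected checksum.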
 
==See also==
* [https://web.archive.org/web/20160304050018/http://www.eetasia.com/ARTICLES/2004NOV/A/2004NOV29_MEM_AN10.PDF?SOURCES=DOWNLOAD "Error Correction Code in NAND Flash memories"] 2004-11-29
* [http://perspectives.mvdirona.com/2012/02/observations-on-errors-corrections-trust-of-dependent-systems/ Observations on Errors, Corrections, & Trust of Dependent Systems], by James Hamilton, 2012-02-26
* ''[[Sphere Packings, Lattices and Groups]],'' by J. H. Conway, Neil James Alexander Sloane, [[Springer Science & Business Media]], 2013-03-09 – Mathematics – 682 pages.
 
==External links==