{{Main|Block code|Convolutional code}}
[[File:BlockCont.png|right]]
The two main categories of ECC codes are [[block code]]s and [[convolutional code]]s.
This provides single-bit error correction and 2-bit error detection.
Hamming codes are only suitable for more reliable [[single-level cell]] (SLC) NAND.
Denser [[multi-level cell]] (MLC) NAND requires stronger multi-bit correcting ECC such as BCH or Reed–Solomon.<ref name="spansion">[http://www.spansion.com/Support/Application%20Notes/Types_of_ECC_Used_on_Flash_AN.pdf "What Types of ECC Should Be Used on Flash Memory?"]. (Spansion application note). 2011. says: "Both Reed-Solomon algorithm and BCH algorithm are common ECC choices for MLC NAND flash. ... Hamming based block codes are the most commonly used ECC for SLC.... both Reed-Solomon and BCH are able to handle multiple errors and are widely used on MLC flash."</ref><ref>{{cite web|author=Jim Cooke
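As a minimal illustration of the single-bit correction described above (not taken from the cited sources), the following Python sketch encodes four data bits with a systematic (7,4) Hamming code, injects one bit error, and corrects it from the syndrome. The generator and parity-check matrices are one common choice among several equivalent forms, and the extra overall parity bit that would also provide 2-bit error detection is omitted.

<syntaxhighlight lang="python">
import numpy as np

# Generator and parity-check matrices of a systematic (7,4) Hamming code.
# (One common choice; other equivalent forms exist.)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Map 4 data bits to a 7-bit codeword."""
    return (np.array(data4) @ G) % 2

def decode(received7):
    """Correct at most one flipped bit and return the 4 data bits."""
    r = np.array(received7)
    syndrome = (H @ r) % 2
    if syndrome.any():
        # a single-bit error produces a syndrome equal to that bit's column of H
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                r[pos] ^= 1
                break
    return r[:4]  # systematic code: the first four bits carry the data

codeword = encode([1, 0, 1, 1])
codeword[5] ^= 1                        # simulate a single read error
assert list(decode(codeword)) == [1, 0, 1, 1]
</syntaxhighlight>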
Classical block codes are usually decoded using '''hard-decision''' algorithms,<ref>{{cite journal |author-last1=Baldi |author-first1=M. |author-last2=Chiaraluce |author-first2=F. |title=A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions |journal=[[International Journal of Digital Multimedia Broadcasting]] |volume=2008 |pages=1–12 |date=2008 |doi=10.1155/2008/957846 |url=http://www.hindawi.com/journals/ijdmb/2008/957846.html}}</ref> which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using '''soft-decision''' algorithms like the Viterbi, MAP or [[BCJR algorithm|BCJR]] algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding.
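To make the distinction concrete, the sketch below (a toy example using a three-fold repetition code, not one of the codes discussed here) compares hard-decision and soft-decision decoding over a simulated Gaussian channel: thresholding each sample before a majority vote discards information that the soft decoder retains by summing the raw samples.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def transmit(bit, noise_std):
    """BPSK (0 -> +1, 1 -> -1), repeated three times, over a Gaussian channel."""
    symbols = np.full(3, 1.0 - 2.0 * bit)
    return symbols + rng.normal(0.0, noise_std, 3)

def hard_decision(received):
    # threshold each sample to a bit first, then take a majority vote
    bits = (received < 0).astype(int)
    return int(bits.sum() >= 2)

def soft_decision(received):
    # combine the raw (analog) samples before deciding
    return int(received.sum() < 0)

trials, noise_std = 20_000, 1.2
hard_errors = soft_errors = 0
for _ in range(trials):
    bit = int(rng.integers(2))
    r = transmit(bit, noise_std)
    hard_errors += hard_decision(r) != bit
    soft_errors += soft_decision(r) != bit

print(hard_errors / trials, soft_errors / trials)  # soft decoding yields the lower error rate
</syntaxhighlight>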
Line 85:
A few forward error correction codes are designed to correct bit insertions and bit deletions, such as marker codes and watermark codes.
The [[Levenshtein distance]] is a more appropriate way to measure the bit error rate when using such codes.
<ref>{{cite web |author-last1=Shah |author-first1=Gaurav |author-last2=Molina |author-first2=Andres |author-last3=Blaze |author-first3=Matt |title=Keyboards and covert channels |url=https://www.usenix.org/legacy/event/sec06/tech/full_papers/shah/shah_html/jbug-Usenix06.html |website=USENIX |date=2006}}</ref>
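As a brief illustration (with hypothetical bit strings), the Levenshtein distance counts insertions, deletions, and substitutions, so a dropped bit costs a single edit instead of shifting every subsequent position as a position-by-position (Hamming-style) comparison would:

<syntaxhighlight lang="python">
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

sent = "1011010"
received = "111011"   # hypothetical received word: one bit dropped, one bit flipped
print(levenshtein(sent, received))  # 2 edits, even though most positions no longer line up
</syntaxhighlight>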
==Code-rate and the tradeoff between reliability and data rate==
The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate.<ref>{{citation |author-first1=David |author-last1=Tse |author-first2=Pramod |author-last2=Viswanath |title=Fundamentals of Wireless Communication |publisher=[[Cambridge University Press]], UK |date=2005}}</ref> At one extreme, a strong code (with a low code-rate) can provide a significant coding gain, effectively increasing the receiver SNR and thus decreasing the bit error rate, at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e. a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection.
One interesting question is the following: how efficient, in terms of information transfer, can an ECC be if it has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero:<ref name="shannon paper">{{cite journal |author-first=C. E. |author-last=Shannon |title=A Mathematical Theory of Communication |journal=[[Bell System Technical Journal]] |volume=27 |issue=3 |pages=379–423 |date=1948}}</ref>
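As a simple numerical illustration (using the binary symmetric channel and an example crossover probability, not a value from the cited sources), the capacity bound can be computed directly:

<syntaxhighlight lang="python">
from math import log2

def bsc_capacity(p):
    """Capacity (in information bits per channel bit) of a binary symmetric
    channel that flips each transmitted bit with probability p."""
    if p in (0.0, 1.0):
        return 1.0
    binary_entropy = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1 - binary_entropy

p = 0.05                 # example crossover probability
print(bsc_capacity(p))   # ~0.714: code rates below this can be made arbitrarily reliable,
                         # rates above it cannot, regardless of the ECC used
</syntaxhighlight>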
The most popular ECCs involve a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates that can be optimized depending on the scenario. This optimization is typically done so as to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions in order to minimize the energy cost of the communication.<ref>{{Cite conference |title=Optimizing the code rate for achieving energy-efficient wireless communications |first1=
==Concatenated ECC codes for improved performance==
{{redirect|Interleaver|the fiber-optic device|optical interleaver}}
[[File:Interleaving1.png|right]]
Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many [[communication channel]]s are not memoryless: errors typically occur in [[burst error|burst]]s rather than independently. If the number of errors within a [[code word]] exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more [[Uniform distribution (continuous)|uniform distribution]] of errors.<ref name="turbo-principles">{{cite book |author-first1=B. |author-last1=Vucetic |author-first2=J. |author-last2=Yuan |title=Turbo codes: principles and applications |publisher=[[Springer Verlag]] |isbn=978-0-7923-7868-6 |date=2000}}</ref> Therefore, interleaving is widely used for [[burst error-correcting code|burst error-correction]].
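A minimal sketch of a rectangular (row–column) interleaver, using made-up symbols, shows the effect: a burst that wipes out four consecutive transmitted symbols is spread so that each code word sees at most one erasure.

<syntaxhighlight lang="python">
def interleave(symbols, rows, cols):
    """Write the symbols row by row into a rows x cols block, read them out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # reading column by column and writing row by row inverts the permutation
    return interleave(symbols, cols, rows)

data = list("aaaabbbbccccdddd")        # four 4-symbol code words
tx = interleave(data, rows=4, cols=4)
tx[4:8] = ["*"] * 4                    # a burst erases four consecutive transmitted symbols
rx = deinterleave(tx, rows=4, cols=4)
print("".join(rx))                     # a*aab*bbc*ccd*dd: one erasure per code word
</syntaxhighlight>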
The analysis of modern iterated codes, like [[turbo code]]s and [[LDPC code]]s, typically assumes an independent distribution of errors.<ref>{{cite journal |author-first1=Michael |author-last1=Luby |author-link1=Michael Luby |author-first2=M. |author-last2=Mitzenmacher |author-first3=A. |author-last3=Shokrollahi |author-first4=D. |author-last4=Spielman |author-first5=V. |author-last5=Stemann |title=Practical Loss-Resilient Codes |journal=Proc. 29th Annual [[Association for Computing Machinery]] (ACM) Symposium on Theory of Computation |date=1997}}</ref> Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.<ref>{{Cite journal |title=Digital Video Broadcast (DVB); Second generation framing structure, channel coding and modulation systems for Broadcasting, Interactive Services, News Gathering and other satellite broadband applications (DVB-S2) |journal=En 302 307 |issue=V1.2.1 |publisher=[[ETSI]] |date=April 2009}}</ref>
For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance.<ref name="turbo-principles"/><ref>{{cite journal
Interleaver designs include:
* convolutional interleavers
* random interleavers (where the interleaver is a known random permutation)
* S-random interleavers (where the interleaver is a known random permutation with the constraint that no input symbols within distance S appear within a distance of S in the output; a construction sketch follows this list).<ref>{{cite paper|first=S.
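The following sketch (an illustrative rejection-based construction with arbitrarily chosen parameters) builds an S-random permutation by accepting each candidate position only if it differs by more than S from the last S accepted positions; a commonly cited rule of thumb is to keep S below roughly √(N/2) so that the search converges.

<syntaxhighlight lang="python">
import random

def s_random_interleaver(n, s, max_restarts=100):
    """Build a permutation of length n in which positions placed within s of
    each other in the output differ by more than s in the input (and vice versa)."""
    for _ in range(max_restarts):
        pool = list(range(n))
        random.shuffle(pool)
        perm = []
        while pool:
            # accept the first candidate that is more than s away from each of
            # the last s accepted input positions
            for idx, cand in enumerate(pool):
                if all(abs(cand - prev) > s for prev in perm[-s:]):
                    perm.append(pool.pop(idx))
                    break
            else:
                break          # dead end: restart with a fresh shuffle
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random permutation found; reduce s")

pi = s_random_interleaver(64, s=5)     # arbitrary example parameters
</syntaxhighlight>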
In multi-[[carrier signal|carrier]] communication systems, interleaving across carriers may be employed to provide frequency [[diversity scheme|diversity]], e.g., to mitigate [[frequency-selective fading]] or narrowband interference.<ref>{{Cite journal |title=Digital Video Broadcast (DVB); Frame structure, channel coding and modulation for a second generation digital terrestrial television broadcasting system (DVB-T2) |journal=En 302 755 |issue=V1.1.1 |publisher=[[ETSI]] |date=September 2009}}</ref>
=== Disadvantages of interleaving ===
The use of interleaving techniques increases the total delay, because the entire interleaved block must be received before the packets can be decoded.<ref>{{cite web |title=Explaining Interleaving|author=Techie|date=June 3, 2010|
== Software for error-correcting codes ==