This provides single-bit error correction and 2-bit error detection.
Hamming codes are only suitable for more reliable [[single-level cell]] (SLC) NAND.
Denser [[multi-level cell]] (MLC) NAND may use multi-bit correcting ECC such as [[BCH code]]s.
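The single-bit error correction mentioned above can be illustrated with the classic Hamming(7,4) code. The sketch below is purely illustrative (real NAND controllers implement this in hardware, and 2-bit detection additionally requires an overall parity bit):

```python
import numpy as np

# Hamming(7,4) in systematic form: G = [I | P], H = [P^T | I].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (data4 @ G) % 2

def correct(received7):
    syndrome = (H @ received7) % 2
    if syndrome.any():
        # A single-bit error produces a syndrome equal to the
        # corresponding column of H, which locates the flipped bit.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                received7 = received7.copy()
                received7[pos] ^= 1
                break
    return received7[:4]  # systematic code: first 4 bits are the data

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw_err = cw.copy()
cw_err[2] ^= 1  # flip one bit in the channel
assert np.array_equal(correct(cw_err), msg)
```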
Classical block codes are usually decoded using '''hard-decision''' algorithms,<ref>{{cite journal |author-last1=Baldi |author-first1=M. |author-last2=Chiaraluce |author-first2=F. |title=A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions |journal=[[International Journal of Digital Multimedia Broadcasting]] |volume=2008 |pages=1–12 |date=2008 |doi=10.1155/2008/957846 |doi-access=free }}</ref> which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using '''soft-decision''' algorithms like the Viterbi, MAP or [[BCJR algorithm|BCJR]] algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding.
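The advantage of soft-decision decoding can be seen with a toy repetition code over a Gaussian channel. The sketch below is a Monte Carlo illustration with arbitrary parameters (five repetitions, BPSK signalling, noise level 0.8), not a model of any particular decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit(bit, n=5, sigma=0.8):
    """BPSK: bit 0 -> +1.0, bit 1 -> -1.0, sent n times with Gaussian noise."""
    symbol = 1.0 if bit == 0 else -1.0
    return symbol + sigma * rng.standard_normal(n)

def hard_decode(y):
    # Hard decision: threshold each sample to a bit first, then majority-vote.
    bits = (y < 0).astype(int)
    return int(bits.sum() > len(bits) / 2)

def soft_decode(y):
    # Soft decision: keep the analog values and decide on their sum,
    # so strongly-received samples outweigh borderline ones.
    return int(y.sum() < 0)

errors_hard = errors_soft = 0
for _ in range(20000):
    bit = int(rng.integers(2))
    y = transmit(bit)
    errors_hard += hard_decode(y) != bit
    errors_soft += soft_decode(y) != bit
# Soft decision retains reliability information and typically
# produces markedly fewer errors than hard decision here.
print(errors_hard, errors_soft)
```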
Line 103:
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero:<ref name="shannon paper">{{cite journal|first=C. E.|last=Shannon|title=A mathematical theory of communication|journal=[[Bell System Technical Journal]]|volume=27|issue=3–4|pages=379–423 & 623–656|date=1948|url=http://www.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf|doi=10.1002/j.1538-7305.1948.tb01338.x|hdl=11858/00-001M-0000-002C-4314-2|hdl-access=free}}</ref> His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long line of research on ECCs that approach this ultimate performance boundary, and various codes today come very close to the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.
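For a concrete instance of this limit, the capacity of the binary symmetric channel with crossover probability p is C = 1 − H(p), where H is the binary entropy function; a small sketch:

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity (bits per channel use) of a binary symmetric
    channel with crossover probability p: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

# A channel that flips about 11% of bits still supports reliable
# communication at any rate below ~0.5 bit per channel use:
print(round(bsc_capacity(0.11), 3))  # → 0.5
```

At p = 0.5 the output is independent of the input and the capacity drops to zero, which matches the formula: `bsc_capacity(0.5)` returns `0.0`.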
The most popular ECCs trade off performance against computational complexity. Their parameters typically give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization aims at a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to minimize the energy cost of the communication.<ref>{{Cite conference |title=Optimizing the code rate for achieving energy-efficient wireless communications |first1=F. |last1=Rosas |first2=G. |last2=Brante |first3=R. D. |last3=Souza |first4=C. |last4=Oberli}}</ref>
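The energy trade-off can be sketched with a toy ARQ model. The cost function and the operating points below are invented for illustration only and are not taken from the cited work: with retransmission until success, the expected number of transmissions of a block is 1/(1 − ε), and the energy spent per information bit scales with the inverse code rate:

```python
# Toy model: energy per delivered information bit under ARQ.
# A lower code rate costs more redundancy per block (factor 1/rate)
# but lowers the block error rate eps, hence fewer retransmissions.
def energy_per_info_bit(rate, eps):
    assert 0 < rate <= 1 and 0 <= eps < 1
    return (1.0 / rate) / (1.0 - eps)

# Hypothetical operating points (rate, block error rate):
# higher rate -> less redundancy -> more decoding failures.
candidates = [(0.5, 0.01), (0.75, 0.10), (0.9, 0.40)]
best = min(candidates, key=lambda c: energy_per_info_bit(*c))
print(best)  # → (0.75, 0.1): the intermediate rate wins this trade-off
```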
==Concatenated ECC codes for improved performance==
[[Locally decodable code]]s are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. [[Locally testable code]]s are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
Not all locally decodable codes (LDCs) are locally testable codes (LTCs),<ref name="decNotTest">{{cite web |last1=Kaufman |first1=Tali |author1-link=Tali Kaufman |last2=Viderman |first2=Michael |title=Locally Testable vs. Locally Decodable Codes |url=http://eccc.hpi-web.de/report/2010/130/revision/1/download/}}</ref> nor are they all locally correctable codes (LCCs).<ref>{{Cite web |last=Brubaker |first=Ben |date=2024-01-09 |title='Magical' Error Correction Scheme Proved Inherently Inefficient |url=https://www.quantamagazine.org/magical-error-correction-scheme-proved-inherently-inefficient-20240109/ |access-date=2024-01-09 |website=Quanta Magazine |language=en}}</ref> Moreover, the length of q-query LCCs is bounded exponentially,<ref>{{Cite arXiv |last1=Kothari |first1=Pravesh K. |last2=Manohar |first2=Peter |date=2023 |title=An Exponential Lower Bound for Linear 3-Query Locally Correctable Codes |class=cs.CC |eprint=2311.00558}}</ref><ref>{{Cite book |last1=Kerenidis |first1=Iordanis |last2=de Wolf |first2=Ronald |chapter=Exponential lower bound for 2-query locally decodable codes via a quantum argument |date=2003-06-09 |title=Proceedings of the thirty-fifth annual ACM symposium on Theory of computing |chapter-url=https://dl.acm.org/doi/10.1145/780542.780560 |language=en |publisher=ACM |pages=106–115 |doi=10.1145/780542.780560 |arxiv=quant-ph/0208062 |isbn=978-1-58113-674-6 |s2cid=10585919}}</ref> while LDCs can have [[Subexponential time|subexponential]] lengths.<ref>{{Cite journal |last=Yekhanin |first=Sergey |date=February 2008 |title=Towards 3-query locally decodable codes of subexponential length |url=https://dl.acm.org/doi/10.1145/1326554.1326555 |journal=[[Journal of the ACM]] |language=en |volume=55 |issue=1 |pages=1–16 |doi=10.1145/1326554.1326555 |s2cid=14617710 |issn=0004-5411 |url-access=subscription}}</ref><ref>{{Cite book |last=Efremenko |first=Klim |chapter=3-query locally decodable codes of subexponential length |date=2009-05-31 |title=Proceedings of the forty-first annual ACM symposium on Theory of computing |chapter-url=https://dl.acm.org/doi/10.1145/1536414.1536422 |language=en |publisher=ACM |pages=39–44 |doi=10.1145/1536414.1536422 |isbn=978-1-60558-506-2 |s2cid=263865692}}</ref>
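The standard illustration of local decodability is the two-query decoder for the [[Hadamard code]]: since the codeword stores every inner product ⟨x, r⟩ mod 2, querying two positions r and r ⊕ e<sub>i</sub> recovers bit x<sub>i</sub>, and each query is likely to hit an uncorrupted position when the corruption fraction is small. The message length and corruption level below are arbitrary choices for the sketch:

```python
import random

random.seed(1)
K = 8  # message length; the Hadamard codeword has 2**K positions

def parity(n):
    return bin(n).count("1") % 2

def hadamard_encode(x):
    """Codeword position r stores the inner product <x, r> mod 2."""
    return [parity(x & r) for r in range(2 ** K)]

def local_decode_bit(word, i):
    """Two-query local decoding: <x, r> XOR <x, r XOR e_i> = x_i,
    since the inner product is linear over GF(2). Correct whenever
    neither of the two (uniformly distributed) queried positions
    is corrupted."""
    r = random.randrange(2 ** K)
    return word[r] ^ word[r ^ (1 << i)]

x = 0b10110010
word = hadamard_encode(x)
for pos in random.sample(range(2 ** K), 2 ** K // 20):
    word[pos] ^= 1  # corrupt ~5% of the codeword positions

# A majority vote over repeated two-query decodings recovers each bit
# with high probability despite the corruption:
decoded = 0
for i in range(K):
    votes = sum(local_decode_bit(word, i) for _ in range(31))
    decoded |= (votes > 15) << i
assert decoded == x
```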