'''Low-density parity-check (LDPC)''' codes are a class of [[error correction code]]s which (together with the closely related [[turbo code]]s) have gained prominence in [[coding theory]] and [[information theory]] since the late 1990s. Today they are widely used in applications ranging from wireless communications to flash-memory storage. Together with turbo codes, they sparked a revolution in coding theory, achieving order-of-magnitude improvements in performance compared to traditional error correction codes.<ref>{{Cite web |title=Turbo Codes Explained: History, Examples, and Applications - IEEE Spectrum |url=https://spectrum.ieee.org/turbo-codes |access-date=2024-12-18 |website=spectrum.ieee.org |language=en}}</ref>
Central to the performance of LDPC codes is their adaptability to the iterative [[belief propagation]] decoding algorithm. Under this algorithm, they can be designed to approach the theoretical limits ([[Channel capacity|capacities]]) of many channels.<ref name=richardson01/>
Theoretical analysis of LDPC codes focuses on sequences of codes of fixed [[code rate]] and increasing [[block length]]. These sequences are typically tailored to a set of channels. For appropriately designed sequences, the decoding error under belief propagation can often be proven to vanish (to approach zero as the block length grows) at rates that are very close to the capacities of the channels. Furthermore, this can be achieved at a complexity that is linear in the block length.
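As a minimal illustration of iterative decoding in the simplest setting (the binary erasure channel), the following sketch runs an erasure-correcting message-passing ("peeling") decoder over a small example parity-check matrix; the matrix and codeword are illustrative toy values rather than a practical LDPC code. Each pass touches every nonzero entry of the sparse matrix at most once, which is why the per-iteration cost of such decoders grows only linearly with the block length.

<syntaxhighlight lang="python">
# Illustrative sketch: erasure decoding by message passing ("peeling").
# The parity-check matrix and codeword below are toy values, not a real LDPC code.

H = [  # each row is one parity check; each column is one code bit
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def peel_decode(H, received):
    """Resolve erasures (None) by repeatedly using checks with a single unknown bit."""
    bits = list(received)
    progress = True
    while progress and None in bits:
        progress = False
        for row in H:
            unknown = [j for j, h in enumerate(row) if h and bits[j] is None]
            if len(unknown) == 1:                  # this check determines the erased bit
                j = unknown[0]
                bits[j] = sum(bits[k] for k, h in enumerate(row)
                              if h and k != j) % 2
                progress = True
    return bits                                    # may still contain None if decoding stalls

codeword = [1, 0, 0, 0, 1, 1, 0]                   # satisfies H * c^T = 0 (mod 2)
received = [None, 0, 0, 0, None, 1, 0]             # two bits erased by the channel
print(peel_decode(H, received))                    # -> [1, 0, 0, 0, 1, 1, 0]
</syntaxhighlight>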
|doi=10.1049/el:19961141 |bibcode=1996ElL....32.1645M |url=https://docs.switzernet.com/people/emin-gabrielyan/060708-thesis-ref/papers/MacKay96.pdf}}</ref> Initial industry preference for LDPC codes over turbo codes stemmed from patent-related constraints on the latter.<ref name="Closing">{{cite journal |author=Erico Guizzo |date=Mar 1, 2004 |title=CLOSING IN ON THE PERFECT CODE |url=https://spectrum.ieee.org/closing-in-on-the-perfect-code |url-status=dead |journal=IEEE Spectrum |archive-url=https://web.archive.org/web/20210902170851/https://spectrum.ieee.org/closing-in-on-the-perfect-code |archive-date=September 2, 2021}} "Another advantage, perhaps the biggest of all, is that the LDPC patents have expired, so companies can use them without having to pay for intellectual-property rights."</ref> Since then, advances in LDPC codes have seen them surpass turbo codes in terms of [[error floor]] and performance in the higher [[code rate]] range, leaving turbo codes better suited only to the lower code rates.<ref>[http://deepspace.jpl.nasa.gov/dsndocs/810-005/208/208B.pdf Telemetry Data Decoding, Design Handbook]</ref> Although the fundamental patent for turbo codes expired on August 29, 2013,<ref>{{cite patent|country=US|number=5446747|url=https://www.google.com/patents/US5446747}}</ref><ref>{{cite journal |last=Mackenzie |first=D. |date=9 July 2005 |title=Communication speed nears terminal velocity |journal=New Scientist}}</ref> LDPC codes are still preferred for their technical merits.
Theoretical interest in LDPC codes also follows from their amenability to mathematical analysis. In his dissertation, Gallager showed that LDPC codes achieve the [[Gilbert–Varshamov bound for linear codes]] over binary fields with high probability. Over the [[binary erasure channel]], code sequences have been designed at rates arbitrarily close to channel capacity, with provably vanishing decoding error probability and linear decoding complexity.<ref name=richardson01/>
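For reference, the binary form of this bound guarantees, for any relative minimum distance <math>\delta < \tfrac{1}{2}</math>, the existence of codes of rate
<math display="block">R \ge 1 - H_2(\delta), \qquad H_2(\delta) = -\delta \log_2 \delta - (1-\delta)\log_2(1-\delta),</math>
and Gallager's result states that codes drawn from his LDPC ensembles meet this trade-off with high probability. Likewise, the capacity of the binary erasure channel with erasure probability <math>\varepsilon</math> is <math>1 - \varepsilon</math>, the rate that the capacity-approaching sequences mentioned above are designed to approach.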
== Applications ==
== Code construction ==
For large block sizes, LDPC codes are commonly constructed by first studying the behaviour of decoders. As the block size tends to infinity, LDPC decoders can be shown to have a noise threshold below which decoding is reliably achieved, and above which decoding is not achieved,<ref name=richardson01>{{cite journal |first1=T.J. |last1=Richardson |first2=M.A. |last2=Shokrollahi |first3=R.L. |last3=Urbanke |title=Design of capacity-approaching irregular low-density parity-check codes |journal=IEEE Transactions on Information Theory |volume=47 |issue=2 |pages=619–637 |date=February 2001 |doi=10.1109/18.910578|bibcode=2001ITIT...47..619R |url=http://infoscience.epfl.ch/record/95795 }}</ref> colloquially referred to as the [[cliff effect]]. This threshold can be optimised by finding the best proportion of arcs from check nodes and arcs from variable nodes. An approximate graphical approach to visualising this threshold is an [[EXIT chart]].{{Citation needed|date=May 2023}}
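As an illustration of how such a threshold can be computed in the simplest setting (a sketch assuming a <math>(d_v, d_c)</math>-regular ensemble on the binary erasure channel, rather than the general analysis of the cited paper), density evolution reduces to the one-dimensional recursion
<math display="block">x_{\ell+1} = \varepsilon \left(1 - (1 - x_\ell)^{d_c - 1}\right)^{d_v - 1}, \qquad x_0 = \varepsilon,</math>
where <math>x_\ell</math> is the fraction of erased messages after <math>\ell</math> iterations and <math>\varepsilon</math> is the channel erasure probability; the threshold is the largest <math>\varepsilon</math> for which the recursion converges to zero.

<syntaxhighlight lang="python">
# Sketch: belief-propagation threshold of a (dv, dc)-regular LDPC ensemble on the
# binary erasure channel, located by bisection over the erasure probability.
# The (3,6)-regular ensemble is used only as a familiar example.

def converges(eps, dv=3, dc=6, iters=5000, tol=1e-9):
    """Density-evolution recursion: does the erased-message fraction die out?"""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True          # erasures vanish: decoding succeeds in the limit
    return False

def bp_threshold(dv=3, dc=6):
    lo, hi = 0.0, 1.0            # convergence is monotone in eps, so bisect
    for _ in range(50):
        mid = (lo + hi) / 2
        if converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

print(round(bp_threshold(), 4))  # about 0.4294; the capacity limit for rate 1/2 is 0.5
</syntaxhighlight>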
After this optimization, the construction of a specific LDPC code falls into two main types of techniques:{{Citation needed|date=May 2023}}
*Pseudorandom approaches
*Combinatorial approaches
Construction by a pseudo-random approach builds on theoretical results that, for large block size, a random construction gives good decoding performance.<ref name=MacKay96/> In general, pseudorandom codes have complex encoders, but pseudorandom codes with the best decoders can have simple encoders.<ref name=richardson01b>{{cite journal |first1=T.J. |last1=Richardson |first2=R.L. |last2=Urbanke |title=Efficient encoding of low-density parity-check codes |journal=IEEE Transactions on Information Theory |volume=47 |issue=2 |pages=638–656 |date=February 2001 |doi=10.1109/18.910579 |bibcode=2001ITIT...47..638R |url=http://infoscience.epfl.ch/record/95793 }}</ref> Various constraints are often applied to help ensure that the desired properties expected at the theoretical limit of infinite block size occur at a finite block size.{{Citation needed|date=May 2023}}
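As a sketch of the pseudorandom approach (a simplified construction assuming a regular ensemble, not the specific encoder designs of the cited papers), a <math>(d_v, d_c)</math>-regular parity-check matrix can be drawn by randomly pairing variable-node and check-node edge "sockets"; practical designs then impose further constraints, for example to avoid short cycles in the Tanner graph.

<syntaxhighlight lang="python">
import random

def regular_ldpc_parity_check(n, dv, dc, seed=0):
    """Random (dv, dc)-regular parity-check matrix via the edge-socket model.
    The number of checks is m = n*dv / dc, so n*dv must be divisible by dc."""
    assert (n * dv) % dc == 0
    m = n * dv // dc
    rng = random.Random(seed)
    while True:                                  # resample until no parallel edges remain
        var_sockets = [j for j in range(n) for _ in range(dv)]
        chk_sockets = [i for i in range(m) for _ in range(dc)]
        rng.shuffle(var_sockets)
        edges = set(zip(chk_sockets, var_sockets))
        if len(edges) == n * dv:
            break
    H = [[0] * n for _ in range(m)]
    for i, j in edges:                           # one 1 per edge of the Tanner graph
        H[i][j] = 1
    return H

H = regular_ldpc_parity_check(n=12, dv=3, dc=6)
print(len(H), "checks,", len(H[0]), "bits, row weights:", [sum(row) for row in H])
</syntaxhighlight>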
Combinatorial approaches can be used to optimize the properties of small block-size LDPC codes or to create codes with simple encoders.{{Citation needed|date=May 2023}}
Yet another way of constructing LDPC codes is to use [[finite geometry|finite geometries]]. This method was proposed by Y. Kou ''et al.'' in 2001.<ref name=Kou1>{{cite journal |first1=Y. |last1=Kou |first2=S. |last2=Lin |first3=M.P.C. |last3=Fossorier |title=Low-density parity-check codes based on finite geometries: a rediscovery and new results |journal=IEEE Transactions on Information Theory |volume=47 |issue=7 |pages=2711–36 |date=November 2001 |doi=10.1109/18.959255 |bibcode=2001ITIT...47.2711K |citeseerx=10.1.1.100.3023 }}</ref>
== Compared to turbo codes ==