{{Use mdy dates|date = March 2019}}
'''Low-density parity-check (LDPC)''' codes are a class of [[error correction code]]s defined by sparse [[parity-check matrix|parity-check matrices]].
Central to the performance of LDPC codes is their adaptability to the iterative [[belief propagation]] decoding algorithm. Under this algorithm, they can be designed to approach the theoretical limits ([[Channel capacity|capacities]]) of many channels<ref>{{Cite web |title=Design of capacity-approaching irregular low-density parity-check codes |url=https://ieeexplore.ieee.org/document/910578 |archive-url=http://web.archive.org/web/20240909161749/https://ieeexplore.ieee.org/document/910578/ |archive-date=2024-09-09 |access-date=2024-12-18 |website=ieeexplore.ieee.org |language=en-US}}</ref> at low computational cost.
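As a minimal illustrative sketch (not taken from the cited sources), the following Python fragment shows the special case of belief-propagation decoding over a [[binary erasure channel]], where the algorithm reduces to repeatedly using any parity check with exactly one erased bit to recover that bit; the 3×6 parity-check matrix and the received word are toy values chosen only for illustration.

<syntaxhighlight lang="python">
# Toy sketch of iterative decoding on the binary erasure channel (BEC):
# belief propagation here reduces to solving, in turn, every parity check
# that has exactly one erased bit. H below is an illustrative matrix only.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def decode_bec(received):
    """received: list of 0, 1, or None, where None marks an erased bit."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            positions = [j for j, h in enumerate(row) if h == 1]
            erased = [j for j in positions if bits[j] is None]
            if len(erased) == 1:
                # The parity of every check must be 0, so the single
                # unknown bit equals the XOR of the known bits.
                parity = sum(bits[j] for j in positions if bits[j] is not None) % 2
                bits[erased[0]] = parity
                progress = True
    return bits

# All-zero codeword sent; positions 1 and 3 erased by the channel.
print(decode_bec([0, None, 0, None, 0, 0]))  # [0, 0, 0, 0, 0, 0]
</syntaxhighlight>

On general noisy channels the same message-passing structure is used, but the nodes exchange real-valued likelihoods rather than hard bit values.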
Theoretical analysis of LDPC codes focuses on sequences of codes of fixed [[code rate]] and increasing [[block length]]. These sequences are typically tailored to a set of channels. For appropriately designed sequences, the decoding error under belief propagation can often be proven to be vanishingly small (it approaches zero as the block length increases) at rates that are very close to the capacities of the channels. Furthermore, this can be achieved at a decoding complexity that is linear in the block length.
This theoretical performance is made possible using a flexible design method that is based on sparse [[Tanner graph|Tanner graphs]] (specialized [[bipartite graph|bipartite graphs]]).<ref>{{citation |author=Amin Shokrollahi |url=http://www.ics.uci.edu/~welling/teaching/ICS279/LPCD.pdf |title=LDPC Codes: An Introduction |archive-url=https://web.archive.org/web/20170517034849/http://www.ics.uci.edu/~welling/teaching/ICS279/LPCD.pdf |archive-date=2017-05-17}}</ref>
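As another illustrative sketch (again using a toy matrix rather than a construction from the cited sources), a Tanner graph can be read directly off a sparse parity-check matrix: each column corresponds to a variable node, each row to a check node, and an edge joins them wherever the matrix contains a 1.

<syntaxhighlight lang="python">
# Toy sketch: reading the Tanner graph (a bipartite graph of variable and
# check nodes) off a sparse parity-check matrix. H is illustrative only.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

# One edge (check i, variable j) for every 1 in H.
edges = [(i, j) for i, row in enumerate(H) for j, h in enumerate(row) if h == 1]

# Neighbourhoods of each node, as used by message-passing decoders.
check_neighbours = {i: [j for j, h in enumerate(row) if h == 1]
                    for i, row in enumerate(H)}
variable_neighbours = {j: [i for i, row in enumerate(H) if row[j] == 1]
                       for j in range(len(H[0]))}

print(edges)
# [(0, 0), (0, 1), (0, 3), (1, 1), (1, 2), (1, 4), (2, 0), (2, 2), (2, 5)]
</syntaxhighlight>

The sparsity of the matrix (few 1s per row and column) keeps the graph sparse, which is what keeps each round of message passing computationally cheap.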
==History==
LDPC codes were originally conceived by [[Robert G. Gallager]] (and are thus also known as Gallager codes). Gallager devised the codes in his doctoral dissertation at the [[Massachusetts Institute of Technology]] in 1960.<ref>{{Cite news |last=Hardesty |first=L. |date=January 21, 2010 |title=Explained: Gallager codes |url=http://web.mit.edu/newsoffice/2010/gallager-codes-0121.html |access-date=August 7, 2013 |journal=MIT News}}</ref><ref name="G1962">{{cite journal |last=Gallager |first=R.G. |date=January 1962 |title=Low density parity check codes |journal=IRE Trans. Inf. Theory |volume=8 |issue=1 |pages=21–28 |doi=10.1109/TIT.1962.1057683 |s2cid=260490814 |hdl=1721.1/11804/32786367-MIT}}</ref> The codes were largely ignored at the time, as their iterative decoding algorithm (despite its linear complexity) was computationally prohibitive for the available hardware.
Renewed interest in the codes emerged following the invention of the closely related [[turbo code]]s (1993), whose similarly iterative decoding algorithm outperformed other codes in use at the time. LDPC codes were subsequently rediscovered in 1996.<ref name="MacKay96">[[David J.C. MacKay]] and Radford M. Neal, "Near Shannon Limit Performance of Low Density Parity Check Codes," Electronics Letters, July 1996</ref> Initial industry preference for LDPC codes over turbo codes stemmed from patent-related constraints on the latter.<ref name="Closing">{{cite journal |author=Erico Guizzo |date=Mar 1, 2004 |title=CLOSING IN ON THE PERFECT CODE |url=https://spectrum.ieee.org/closing-in-on-the-perfect-code |url-status=dead |journal=IEEE Spectrum |archive-url=https://web.archive.org/web/20210902170851/https://spectrum.ieee.org/closing-in-on-the-perfect-code |archive-date=September 2, 2021}} "Another advantage, perhaps the biggest of all, is that the LDPC patents have expired, so companies can use them without having to pay for intellectual-property rights."</ref>
Theoretical interest in LDPC codes also follows from their amenability to mathematical analysis. In his dissertation, Gallager showed that LDPC codes achieve the [[Gilbert–Varshamov bound for linear codes]] over binary fields with high probability. Over the [[binary erasure channel]], code sequences were designed at rates arbitrarily close to channel capacity, with provably vanishing decoding error probability and linear decoding complexity.<ref>{{Cite web |title=Design of capacity-approaching irregular low-density parity-check codes |url=https://ieeexplore.ieee.org/document/910578 |archive-url=http://web.archive.org/web/20240909161749/https://ieeexplore.ieee.org/document/910578/ |archive-date=2024-09-09 |access-date=2024-12-19 |website=ieeexplore.ieee.org |language=en-US}}</ref>
==Applications==