Talk:Low-density parity-check code: Difference between revisions

{{oldpeerreview|archive=1}}
{{WikiProject banner shell|class=C|
{{WikiProject Telecommunications }}
}}
{{Expert-talk}}
For a person who has not studied information theory, the introduction chapter says absolutely nothing. It's some state of the art code, rrright. What is it used for, where, and why? I think the introduction should answer these questions first. Then it's okay to jump into the technical details. Besides, what's Forney's factor graph notation anyway? Maybe an article about it should be created, or at least [[Forney]] linked somewhere. (Elsewhere than to [[Forney, Texas]].) --[[User:ZeroOne|ZeroOne]] 19:52, 12 Oct 2004 (UTC)
 
Is not LDPC used in 802.11n? [[Special:Contributions/192.114.175.2|192.114.175.2]] ([[User talk:192.114.175.2|talk]]) 11:57, 5 December 2007 (UTC)
 
Yes, it is specified in the HT (High Throughput) mode of 802.11n, with code lengths between 648 and 1944 bits. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/131.188.138.93|131.188.138.93]] ([[User talk:131.188.138.93|talk]]) 13:19, 13 August 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
 
== Operational use ==
 
I think some of the details need to be better written here. The variable 'k' is not even defined. Also, the sentence "or there must be an even number of odd values" is confusing. Instead of writing that, one should write something straightforward, such as: the "Exclusive OR" of all the bits linked by the lines will be logical 0. Also, as if by 'magic', the matrices "P" and "I" just pop out of nowhere, and nobody has any idea what they are. [[User:KorgBoy|KorgBoy]] ([[User talk:KorgBoy|talk]]) 08:10, 25 May 2017 (UTC)
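For what it's worth, the "XOR of the linked bits is 0" reading is easy to sketch in a few lines of Python. The matrix below is made up for illustration, not the article's H:

```python
import numpy as np

# Hypothetical 3x6 parity-check matrix, made up for illustration.
# Each row is one parity check over the bits it marks with a 1.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def satisfies_checks(H, c):
    # c is a codeword iff, for every row of H, the XOR of the bits
    # touched by that row is 0 -- equivalently, the syndrome
    # H.c (mod 2) is the all-zero vector.
    return not (H.dot(c) % 2).any()

valid = np.array([1, 0, 1, 1, 1, 0])          # every check XORs to 0
corrupted = valid ^ np.array([0, 0, 0, 0, 0, 1])  # one flipped bit
```

Flipping any single bit breaks at least one check, which is exactly what the syndrome detects.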
 
I think H should be reduced to -P^T | I_{k} and not (n-k). Because the number of rows of H is actually k. It works for this example since n-k = k here, but for general n and k, it will not work. <!-- Template:Unsigned --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Swaprava|Swaprava]] ([[User talk:Swaprava#top|talk]] • [[Special:Contributions/Swaprava|contribs]]) 17:53, 5 January 2020 (UTC)</small> <!--Autosigned by SineBot-->
 
Similarly, G will be starting with I_{n-k} and not k. LDPC codes encode n-k bits into n bits, right? <!-- Template:Unsigned --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Swaprava|Swaprava]] ([[User talk:Swaprava#top|talk]] • [[Special:Contributions/Swaprava|contribs]]) 17:56, 5 January 2020 (UTC)</small> <!--Autosigned by SineBot-->
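For reference in this thread, the usual systematic-form convention can be checked numerically: G = [I_k | P] maps k message bits to n code bits, and H = [-P^T | I_{n-k}] (over GF(2), -P^T = P^T). The parity part P and the message below are made up for illustration:

```python
import numpy as np

k, n = 3, 6
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])                    # k x (n-k), values made up

G = np.hstack([np.eye(k, dtype=int), P])          # k x n generator
H = np.hstack([P.T, np.eye(n - k, dtype=int)])    # (n-k) x n check matrix

# Every row of G is orthogonal to every row of H over GF(2),
# so every encoded word has a zero syndrome.
orthogonal = not (G.dot(H.T) % 2).any()

m = np.array([1, 0, 1])          # k message bits in...
c = m.dot(G) % 2                 # ...n code bits out
syndrome = H.dot(c) % 2
```

With this convention the identity blocks have the sizes given in the article (I_k in G, I_{n-k} in H); in the article's example n-k happens to equal k, which is what makes the two readings easy to confuse.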
 
== Forgotten? ==
 
In the paper that effectively reintroduced LDPC codes (MacKay and Neal, "Near Shannon Limit Performance of Low Density Parity Check Codes") there is a brief mention of this topic. The suggestion is that a slightly related family of codes (concatenated codes) were believed to be better and hence LDPC codes were ignored. This paper cites personal communication from Gallager on this topic. [[User:Edratzer|Edratzer]] 17:25, 17 September 2006 (UTC)
 
LDPC codes were actually rediscovered one year before MacKay and Neal, by Wiberg, Loeliger, and Kötter in "Codes and Iterated Decoding on General Graphs", published in 1995. This was triggered by the publication of Turbo codes in 1993, which sparked the new interest in iterative decoding algorithms. This page should be updated with a reference to that paper. [[User:Nicwi|Nicwi]] ([[User talk:Nicwi|talk]]) 21:21, 6 March 2016 (UTC)
 
== Links ==
Similar to above, this section should say something about decoding algorithms, such as message passing, etc.
This section states that decoding an LDPC code is NP-complete. Well, this is certainly not true for a [[binary erasure channel]], and ironically the example given discusses just that channel.
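To illustrate why erasure decoding is easy: on the binary erasure channel the standard peeling decoder just repeats "find a check with exactly one erased bit, fill it in as the XOR of the known bits". A toy sketch, with a made-up H and received word:

```python
import numpy as np

# Made-up 3x6 parity-check matrix, for illustration only.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def peel(H, received):
    # received holds 0, 1, or None (an erasure).
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [i for i, h in enumerate(row) if h and bits[i] is None]
            if len(unknown) == 1:
                # The check must XOR to 0, so the erased bit equals
                # the XOR of the check's known bits.
                known_xor = sum(bits[i] for i, h in enumerate(row)
                                if h and bits[i] is not None) % 2
                bits[unknown[0]] = known_xor
                progress = True
    return bits

recovered = peel(H, [1, None, 1, 1, None, 0])
```

Each pass only scans the checks, so this runs in time polynomial (essentially linear, with the right data structures) in the block length — no NP-completeness anywhere for this channel.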
 
Does the subsection on lookup-table decoding need a citation? Or does anyone know of some elaboration on the method described? I'm not sure how a 1024-bit table would help decode an LDPC with a 1024-bit block size. (I may be misreading that section; it could probably be cleaned up anyway; e.g. "very high iterations" could be "many iterations" or some more fluent wording.) <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/173.73.92.75|173.73.92.75]] ([[User talk:173.73.92.75|talk]]) 03:30, 15 February 2014 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
The section on the lookup table absolutely needs a citation. How can any machine store a lookup table for arbitrary 1024-bit strings, much less a microcontroller? -EdwinOlson <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/64.94.31.206|64.94.31.206]] ([[User talk:64.94.31.206|talk]]) 23:13, 24 June 2014 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
=== Other things ===
Sorry if points are rather rough, it's late, I'm tired, and I am not an expert on coding.
Anyway, thanks for giving a check. G'night ;) [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 22:39, 22 May 2009 (UTC)
 
==Capacity achieving?==
 
The third sentence of the introduction ("LDPC was the first code to allow data transmission rates close to the theoretical maximum") does not really bring any valuable information. It all depends too much on what is meant by "close to the theoretical maximum". Either one should give a precise value like "within XX dB from theoretical maximum" with a corresponding reference, or one should lose the sentence altogether. [[User:Kaioshin82|Kaioshin82]] ([[User talk:Kaioshin82|talk]]) 13:19, 30 July 2009 (UTC)
 
:I've tried to clarify the start of the article. I'm guessing the original author is referring to whether or not LDPC codes can achieve capacity. [[User:Edratzer|Edratzer]] ([[User talk:Edratzer|talk]]) 08:08, 5 August 2009 (UTC)
 
::Thanks! That's a much better introduction. [[User:Pfagerburg|Pfagerburg]] ([[User talk:Pfagerburg|talk]]) 01:28, 7 August 2009 (UTC)
 
== comparison to multidimensional parity check code ==
:::: does "sqrt_d(k) ones per column" make a matrix sparse?
::::: since sqrt(k)/k approaches 0 with k->infinity, I would say, yes, they are sparse... but that's just my opinion.
:::::: I think the number of ones per column is <math>\lim_{k\rightarrow \infty} \frac{dk + 2\sqrt{k}}{k + 2\sqrt{k}} = d</math>. (Note that I'm tired, so I could easily have screwed up...)
:::::: Your threshold for sparsity seems to be based on the number of ones divided by the overall number of elements (''n''.''k''). This sounds fair enough, but wouldn't this classify most codes as sparse (e.g. Hamming)? [[User:Oli Filth|Oli Filth]]<sup>([[User talk:Oli Filth|talk]]<nowiki>|</nowiki>[[Special:Contributions/Oli_Filth|contribs]])</sup> 19:01, 8 June 2009 (UTC)
::::::: If I didn't mess things up in my head, then for a hypercube of dimension ''d'' and a ''k'' element word you would have sqrt_d(k) ones per column in the parity-check matrix, and consequently the percentage of ones in the matrix approaches zero. I don't see this is true for most linear block codes, in fact RS codes have pretty dense parity-check matrices... but I might be dumb right now.
::::::: BTW, I'm just talking about the "parity part" of the matrix... so, number of ones in the parity part divided by (n-k)*k.
::::::: [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 20:59, 8 June 2009 (UTC)
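The limit being discussed in this thread can be sanity-checked numerically. This is purely an illustration that sqrt(k)/k → 0 as k grows, not a claim about any particular code:

```python
import math

# With roughly sqrt(k) ones per length-k column, the fraction of
# ones per column shrinks toward 0 as the block length k grows.
def fraction_of_ones(k):
    return math.sqrt(k) / k      # ones per column / column length

densities = [fraction_of_ones(k) for k in (100, 10_000, 1_000_000)]
# densities: 0.1, 0.01, 0.001 -- monotonically shrinking
```

By this density criterion the matrix is asymptotically sparse, which is the point made above; whether that alone qualifies a construction as "LDPC" is the separate question the thread goes on to debate.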
 
:::: I think that LDPC codes are also characterized by their ability to be efficiently decoded by a low-complexity iterative decoding algorithm. [[User:Cunchem|Cunchem]] ([[User talk:Cunchem|talk]]) 14:54, 8 June 2009 (UTC)
::::: Not really. In the original Gallager codes you had to do a full matrix multiplication. The idea was just that there were algorithms doing this faster for sparse matrices than general algorithms, but it was definitely not low complexity.
:::::: In Gallager's original article<ref> http://www.inference.phy.cam.ac.uk/mackay/gallager/papers/</ref>, after analysing the performance of LDPC codes under Maximum Likelihood decoding, he introduces two algorithms in part 4 that are in fact iterative algorithms. He says that these algorithms are suboptimal but have a low complexity. Maybe you were referring to the encoding when speaking about matrix multiplication. [[User:Cunchem|Cunchem]] ([[User talk:Cunchem|talk]]) 19:54, 8 June 2009 (UTC)
 
::::::: It seems my memories were blurry... thanks for correcting me.
::::::: [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 20:59, 8 June 2009 (UTC)
:::: My (rather limited) understanding of LDPC is that the bipartite graph viewpoint and the parity-matrix viewpoint are equivalent (it's just a mapping between codeword bits and parity constraints), therefore ''all'' linear block codes have one. In the case of LDPC, however, the sparsity of the parity matrix leads to a sparse graph, which in turn leads to an efficient decoder. So I'm still not convinced that MDPC is especially relevant, and would still be reluctant to include this without some authoritative reference. [[User:Oli Filth|Oli Filth]]<sup>([[User talk:Oli Filth|talk]]<nowiki>|</nowiki>[[Special:Contributions/Oli_Filth|contribs]])</sup> 15:31, 8 June 2009 (UTC)
::::: All linear block codes may be modeled as bipartite graphs, sure. But the point is: how do you construct the codes? If you take the graph approach, this is what would be called LDPC codes. Other codes, e.g. RS codes, take other approaches, such as polynomial interpolation.
::::: I agree with you regarding MDPC. As I said before, it seems very stretched to view MDPC codes as constructed via bipartite graphs, since the original intention was likely simply to compute single parity bits over rows and columns in a hypercube. I suggest leaving mentioning MDPC codes out, or otherwise, if there is an appropriate place (subjective) in the article, then it should only say that "MDPC codes ''may'' be viewed as simple regular LDPC codes".
::::: Edit: I rather vote for removing the reference to MDPC codes.
::::: [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 16:59, 8 June 2009 (UTC)
 
:::::: I see your point on why LDPC construction is special, and I think we agree that it means that MDPC is no more relevant than any other code construction. [[User:Oli Filth|Oli Filth]]<sup>([[User talk:Oli Filth|talk]]<nowiki>|</nowiki>[[Special:Contributions/Oli_Filth|contribs]])</sup> 18:46, 8 June 2009 (UTC)
 
:Regarding randomness, I do not recall randomness being a central criterion in the definition of LDPC codes; in fact, you could very well construct low-density parity check matrices algebraically and deterministically, it's just that randomized constructions turn out to be superior.
:[[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 09:14, 8 June 2009 (UTC)
 
== @ [[User:78.42.25.227]] ==
I am confused by your recent edit... In the parity-check matrix, there will be either 1's or 0's. The null space of the matrix defines the graph. I don't see why there should be "probabilities" in the matrix for syndrome computation. And syndrome computation normally tells you whether decoding was correct (syndrome is 0) or not (not 0). Please justify this edit (anyone), either on the talk page or with a reference, or I will revert it. Thanks, cheers, [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 18:14, 20 October 2009 (UTC)
 
:Oops, I just reverted the change before reading your post... Maybe it is a way to represent or implement the iterative decoding, and in this case it should be justified. <small><span class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Cunchem|Cunchem]] ([[User talk:Cunchem|talk]] • [[Special:Contributions/Cunchem|contribs]]) 07:39, 21 October 2009 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
 
== Probably trivial in the context... ==
 
Trivial, I'm sure, but...
 
In the introduction to the article, it refers to Gallager developing the LDPC concept in his doctoral dissertation at MIT in 1960.
 
However, in the History section, it states that LDPC codes were first developed by Gallager in 1963. Reference 6 states 1963. Which is true?
 
David Kelsey <small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/101.178.141.19|101.178.141.19]] ([[User talk:101.178.141.19|talk]]) 00:54, 27 September 2015 (UTC)</small><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
:"The story of LDPC codes and iterative decoding begins in Gallager’s remarkable Ph.D. thesis completed in 1960, and later published in 1963"
:[R. G. Gallager. Low-Density Parity-Check Codes. MIT Press, 1963]
:[[User:Markovisch|Markovisch]] ([[User talk:Markovisch#top|talk]] • [[Special:Contributions/Markovisch|contribs]]) 01:02, 27 March 2017 (UTC)
 
== External links modified ==
 
Hello fellow Wikipedians,
 
I have just modified 5 external links on [[Low-density parity-check code]]. Please take a moment to review [[special:diff/819110467|my edit]]. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit [[User:Cyberpower678/FaQs#InternetArchiveBot|this simple FaQ]] for additional information. I made the following changes:
*Added archive https://web.archive.org/web/20061008053124/http://www.ieeevtc.org/vtc2003fall/2003panelsessions/llee.pdf to http://www.ieeevtc.org/vtc2003fall/2003panelsessions/llee.pdf
*Added archive https://web.archive.org/web/20091213012126/http://blog.ds2.es/ds2blog/2009/10/ieee-communications-magazine-paper-on-ghn.html to http://blog.ds2.es/ds2blog/2009/10/ieee-communications-magazine-paper-on-ghn.html
*Added archive https://web.archive.org/web/20130526003113/http://www.atsc.org/cms/pdf/pt2/Wells_ATSC_paper_on_T2.pdf to http://www.atsc.org/cms/pdf/pt2/Wells_ATSC_paper_on_T2.pdf
*Added archive https://web.archive.org/web/20090926035703/http://www.mathworks.com/access/helpdesk/help/toolbox/comm/ref/fec.ldpcenc.html to http://www.mathworks.com/access/helpdesk/help/toolbox/comm/ref/fec.ldpcenc.html
*Added archive https://web.archive.org/web/20090217170744/http://www.mathworks.com/access/helpdesk/help/toolbox/comm/ref/fec.ldpcdec.html to http://www.mathworks.com/access/helpdesk/help/toolbox/comm/ref/fec.ldpcdec.html
 
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
 
{{sourcecheck|checked=false|needhelp=}}
 
Cheers.—[[User:InternetArchiveBot|'''<span style="color:darkgrey;font-family:monospace">InternetArchiveBot</span>''']] <span style="color:green;font-family:Rockwell">([[User talk:InternetArchiveBot|Report bug]])</span> 14:14, 7 January 2018 (UTC)
 
== Decoding ==
 
The decoding section directly jumps to technicalities and state-of-the-art improvements while never really introducing belief propagation and bit flipping algorithms which I believe are not that hard to understand.
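Agreed that bit flipping is not hard to present. A hard-decision, Gallager-style majority flipping decoder fits in a few lines; the sketch below is illustrative only (made-up H, not the article's algorithm): while some checks fail, flip the bit involved in the most failing checks.

```python
import numpy as np

# Made-up 3x6 parity-check matrix, for illustration only.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(H, received, max_iters=20):
    # Hard-decision bit flipping: while some parity checks fail,
    # flip the bit that appears in the most failing checks.
    c = np.array(received)
    for _ in range(max_iters):
        syndrome = H.dot(c) % 2     # which checks are unsatisfied
        if not syndrome.any():
            break                   # all parity checks pass
        votes = syndrome.dot(H)     # failing-check count per bit
        c[np.argmax(votes)] ^= 1    # flip the worst offender
    return c
```

Something this short, plus a similarly gentle introduction to belief propagation, would make the decoding section far more approachable before it dives into the state-of-the-art material.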
 
The part about seeing an LDPC as independent SPC codes and using SOVA or BCJR is confusing. I believe it tries to draw an analogy between turbo codes or convolutional codes and LDPC codes. It currently has no reference and the only references I could find on applying those algorithms to LDPC were in really specific constructions with structured codes or concatenations.
 
The part about the lookup table has no reference either. I have no doubt that a bit-flipping algorithm can run on a low-power CPU and that some computations can be avoided by using a lookup table (for the check-node computations, for example). But I am not sure that this is what the author meant, and if it is, I don't see how it is relevant to detail the lookup table and the usage of the FIFO.
 
[[User:Bpcov|Bpcov]] ([[User talk:Bpcov|talk]]) 19:44, 28 July 2018 (UTC)