:::::: I think the number of ones per column is <math>\lim_{k\rightarrow \infty} \frac{dk + 2\sqrt{k}}{k + 2\sqrt{k}} = d</math>. (Note that I'm tired, so I could easily have screwed up...)
:::::: Your threshold for sparsity seems to be based on the number of ones divided by the overall number of elements (''n''&middot;''k''). This sounds fair enough, but wouldn't this classify most codes as sparse (e.g. Hamming)? [[User:Oli Filth|Oli Filth]]<sup>([[User talk:Oli Filth|talk]]<nowiki>|</nowiki>[[Special:Contributions/Oli_Filth|contribs]])</sup> 19:01, 8 June 2009 (UTC)
::::::: If I didn't mess things up in my head, then for a hypercube of dimension ''d'' and a ''k''-element word you would have <math>\sqrt[d]{k}</math> ones per column in the parity-check matrix, and consequently the percentage of ones in the matrix approaches zero. I don't see that this is true for most linear block codes; in fact, RS codes have pretty dense parity-check matrices... but I might be dumb right now.
::::::: [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 20:59, 8 June 2009 (UTC)
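To make the density claims above concrete, here is a quick numerical sanity check (the dimension ''d'' and the values of ''k'' below are arbitrary illustrative choices, not taken from the discussion):

```python
# Numerical sanity check for the two claims above: the ratio
# (d*k + 2*sqrt(k)) / (k + 2*sqrt(k)) tends to d as k grows, while the
# fraction of ones per column, roughly k**(1/d) / k, tends to zero.
# d and the k values are arbitrary illustrative choices.
import math

d = 3  # hypercube dimension (arbitrary)
for k in (10**2, 10**4, 10**6, 10**8):
    ones_per_col = (d * k + 2 * math.sqrt(k)) / (k + 2 * math.sqrt(k))
    density = k ** (1 / d) / k  # ones per column over column height
    print(f"k={k:>9}: ones/col = {ones_per_col:.4f}, density = {density:.2e}")
```

As ''k'' grows, the first quantity settles near ''d'' while the density drops toward zero, which is the sparsity argument being made.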
:::: I think that LDPC codes are also characterized by their ability to be efficiently decoded by a low-complexity iterative decoding algorithm. [[User:Cunchem|Cunchem]] ([[User talk:Cunchem|talk]]) 14:54, 8 June 2009 (UTC)
::::: Not really. In the original Gallager codes you had to do a full matrix multiplication. The idea was just that there were algorithms that do this faster for sparse matrices than general algorithms, but it was definitely not low complexity.
::::::: In Gallager's original article<ref> http://www.inference.phy.cam.ac.uk/mackay/gallager/papers/</ref>, after analysing the performance of LDPC codes under maximum-likelihood decoding, he introduces two algorithms in part 4 that are in fact iterative algorithms. He says that these algorithms are suboptimal but have low complexity. Maybe you were referring to the encoding when speaking about matrix multiplication. [[User:Cunchem|Cunchem]] ([[User talk:Cunchem|talk]]) 19:54, 8 June 2009 (UTC)
::::::: It seems my memories were blurry... thanks for correcting me.
::::::: [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 20:59, 8 June 2009 (UTC)
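For reference, the low-complexity iterative idea discussed above can be sketched as a minimal bit-flipping decoder, in the spirit of the algorithms in part 4 of Gallager's paper. The parity-check matrix and received word below are toy examples (a (7,4) Hamming code), not from the paper; real LDPC matrices are much larger and sparser:

```python
# Minimal bit-flipping decoder sketch: repeatedly flip the bit that
# participates in the most unsatisfied parity checks. H and the received
# word are toy examples, not from Gallager's paper.

# Parity-check matrix of the (7,4) Hamming code (toy stand-in).
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def bit_flip_decode(H, r, max_iters=10):
    """Iteratively flip the bit involved in the most failing checks."""
    r = list(r)
    for _ in range(max_iters):
        # syndrome: which parity checks currently fail
        syndrome = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
        if not any(syndrome):
            return r                      # all checks satisfied
        # for each bit, count the failing checks it participates in
        votes = [sum(s * row[j] for s, row in zip(syndrome, H))
                 for j in range(len(r))]
        r[votes.index(max(votes))] ^= 1   # flip the worst offender
    return r

received = [0, 0, 1, 0, 0, 0, 0]          # all-zero codeword, bit 2 flipped
print(bit_flip_decode(H, received))       # [0, 0, 0, 0, 0, 0, 0]
```

Each iteration only touches the ones of ''H'', which is why sparsity translates directly into low decoding complexity.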
:::: My (rather limited) understanding of LDPC is that the bipartite-graph viewpoint and the parity-check-matrix viewpoint are equivalent (it's just a mapping between codeword bits and parity constraints), therefore ''all'' linear block codes have one. In the case of LDPC, however, the sparsity of the parity-check matrix leads to a sparse graph, which in turn leads to an efficient decoder. So I'm still not convinced that MDPC is especially relevant, and would still be reluctant to include this without some authoritative reference. [[User:Oli Filth|Oli Filth]]<sup>([[User talk:Oli Filth|talk]]<nowiki>|</nowiki>[[Special:Contributions/Oli_Filth|contribs]])</sup> 15:31, 8 June 2009 (UTC)
::::: All linear block codes may be modeled as bipartite graphs, sure. But the point is: how do you construct the codes? If you take the graph approach, this is what would be called LDPC codes. Other codes, e.g. RS codes, take other approaches, such as polynomial interpolation.
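To illustrate the equivalence discussed above: the bipartite (Tanner) graph is read straight off the parity-check matrix, with one edge per one in the matrix, so a sparse matrix gives a sparse graph. A minimal sketch (the matrix ''H'' below is an arbitrary toy example):

```python
# The bipartite (Tanner) graph is read off the parity-check matrix:
# variable node j is joined to check node i exactly when H[i][j] == 1,
# so a sparse H gives a sparse graph. H is an arbitrary toy example.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1]]

def tanner_edges(H):
    """Edges (check_i, var_j) of the bipartite graph defined by H."""
    return [(i, j) for i, row in enumerate(H)
                   for j, h in enumerate(row) if h]

edges = tanner_edges(H)
print(edges)                # one edge per 1 in H
print(len(edges), "edges")  # sparsity of H == sparsity of the graph
```

This mapping exists for every linear block code; what distinguishes LDPC codes is that the construction starts from a deliberately sparse graph.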