:::::: Your threshold for sparsity seems to be based on the number of ones divided by the total number of elements (''n''·''k''). This sounds fair enough, but wouldn't this classify most codes as sparse (e.g. Hamming)? [[User:Oli Filth|Oli Filth]]<sup>([[User talk:Oli Filth|talk]]<nowiki>|</nowiki>[[Special:Contributions/Oli_Filth|contribs]])</sup> 19:01, 8 June 2009 (UTC)
::::::: If I didn't mess things up in my head, then for a hypercube of dimension ''d'' and a ''k''-element word you would have ''k''<sup>1/''d''</sup> ones per column in the parity-check matrix, and consequently the percentage of ones in the matrix approaches zero. I don't see that this is true for most linear block codes; in fact, RS codes have pretty dense parity-check matrices... but I might be dumb right now.
::::::: BTW, I'm just talking about the "parity part" of the matrix... so, the number of ones in the parity part divided by (''n''−''k'')·''k''.
::::::: [[User:Nageh|Nageh]] ([[User talk:Nageh|talk]]) 20:59, 8 June 2009 (UTC)
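As a side note on the question above (a sketch of my own, not part of the original exchange): one can check numerically how dense the parity-check matrix of a binary Hamming code actually is under the ones-per-elements measure. For the [2^''m''−1, 2^''m''−1−''m''] Hamming code the columns of ''H'' are the binary representations of 1 … 2^''m''−1, and the fraction of ones tends to 1/2, not 0, so this measure would not classify Hamming codes as sparse:

```python
# Density of ones in the parity-check matrix of the [2^m - 1, 2^m - 1 - m]
# binary Hamming code. Its columns are the binary representations of
# 1 .. 2^m - 1, so we can count ones without building the matrix.

def hamming_density(m):
    n = 2 ** m - 1                                    # code length
    ones = sum(bin(c).count("1") for c in range(1, n + 1))
    return ones / (m * n)                             # ones / total entries

for m in (3, 4, 8):
    print(m, round(hamming_density(m), 3))
# m=3 -> 0.571, m=4 -> 0.533, m=8 -> 0.502 (approaches 1/2, not 0)
```

This is consistent with the point that sparsity in the LDPC sense (vanishing fraction of ones as the block length grows) is a genuinely restrictive property, not one shared by typical linear block codes.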