The fundamental principle of ECC is to add redundant bits in order to help the decoder recover the true message that was encoded by the transmitter. The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication packet. The code-rate is therefore a real number between 0 and 1. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve good performance, while a large code-rate close to 1 implies a weak code.
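As an illustration (the symbols <math>k</math> and <math>n</math> and the Hamming(7,4) example are introduced here for concreteness and are not part of the discussion above), a block code that maps <math>k</math> information bits onto <math>n</math> transmitted bits has code-rate

:<math>R = \frac{k}{n}, \qquad \text{e.g. } R = \frac{4}{7} \approx 0.57 \text{ for the Hamming(7,4) code.}</math>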
How efficient in terms of information transfer can an ECC with a negligible decoding error rate be? This question was answered by Claude Shannon with his second theorem, which states that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero.<ref name="shannon paper">C. E. Shannon: ''A mathematical theory of communication.'' Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, July and October 1948.</ref> His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work motivated a long line of research into designing ECCs that come close to this ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.
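As a concrete instance of Shannon's result (the binary symmetric channel is chosen here purely for illustration and is not discussed above), the capacity of a binary symmetric channel with crossover probability <math>p</math> is

:<math>C = 1 - H_\text{b}(p) = 1 + p \log_2 p + (1 - p)\log_2(1 - p),</math>

where <math>H_\text{b}</math> is the binary entropy function. No code with rate <math>R > C</math> can drive the decoding error probability to zero, while any rate <math>R < C</math> is achievable by some (possibly impractically complex) code.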