== Overview ==
Stated by [[Claude Shannon]] in 1948, the theorem describes the maximum possible efficiency of [[error-correcting code|error-correcting methods]] versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and [[data storage device|data storage]]. This theorem is of foundational importance to the modern field of [[information theory]]. Shannon only gave an outline of the proof. The first rigorous proof for the discrete case is due to [[Amiel Feinstein]]<ref>{{
Shannon's theorem states that, given a noisy channel with [[channel capacity]] ''C'' and information transmitted at a rate ''R'', if <math>R < C</math> then there exist [[code]]s that allow the [[probability of error]] at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below the limiting rate ''C''.
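As a concrete illustration (the channel model and the numbers below are assumptions made for the example, not part of Shannon's statement), a [[binary symmetric channel]] with crossover probability ''p'' has capacity <math>C = 1 - H_b(p)</math>, where <math>H_b</math> is the [[binary entropy function]]. A minimal sketch checking which rates lie below that capacity:

<syntaxhighlight lang="python">
# Sketch: capacity of a binary symmetric channel (BSC) and which rates it admits.
# The crossover probability p = 0.1 is an illustrative assumption.
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """C = 1 - H_b(p) for a BSC with crossover probability p."""
    return 1.0 - binary_entropy(p)

p = 0.1
C = bsc_capacity(p)                # about 0.531 bits per channel use
for R in (0.25, 0.5, 0.75):        # candidate code rates
    verdict = "achievable" if R < C else "not achievable"
    print(f"R = {R}: {verdict} (C = {C:.3f})")
</syntaxhighlight>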
'''Theorem''' (Shannon, 1948):
: 1. For every discrete memoryless channel, the [[channel capacity]], defined in terms of the [[mutual information]] <math>I(X; Y)</math> as
:: <math>\ C = \sup_{p_X} I(X;Y)</math><ref>For a description of the "sup" function, see [[Supremum]]</ref>
: has the following property: for every <math>\epsilon > 0</math> and rate <math>R < C</math>, for large enough <math>n</math>, there exists a code of length <math>n</math> and rate <math>\geq R</math> and a decoding algorithm such that the maximal probability of block error is <math>\leq \epsilon</math>.
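To illustrate the supremum in this definition (the particular channel below, a Z-channel, is an assumption made for the example), the capacity of a binary-input discrete memoryless channel with transition matrix <math>P(y\mid x)</math> can be approximated numerically by searching over input distributions <math>p_X</math>:

<syntaxhighlight lang="python">
# Sketch: approximate C = sup_{p_X} I(X;Y) for a binary-input channel by a grid
# search over the input distribution.  The channel is a Z-channel (illustrative
# assumption): 0 is received noiselessly, a transmitted 1 flips to 0 with
# probability 0.5.
import math

def mutual_information(p_x, channel):
    """I(X;Y) in bits for input distribution p_x and transition matrix channel[x][y]."""
    n_y = len(channel[0])
    p_y = [sum(p_x[x] * channel[x][y] for x in range(len(p_x))) for y in range(n_y)]
    info = 0.0
    for x, px in enumerate(p_x):
        for y in range(n_y):
            p_xy = px * channel[x][y]
            if p_xy > 0:
                info += p_xy * math.log2(p_xy / (px * p_y[y]))
    return info

channel = [[1.0, 0.0],   # P(Y=0|X=0), P(Y=1|X=0)
           [0.5, 0.5]]   # P(Y=0|X=1), P(Y=1|X=1)

q_best, c_best = max(
    ((q / 1000, mutual_information([1 - q / 1000, q / 1000], channel))
     for q in range(1001)),
    key=lambda t: t[1])
print(f"C ≈ {c_best:.3f} bits per channel use, attained near P(X=1) ≈ {q_best:.2f}")
</syntaxhighlight>

The search finds the maximum at a non-uniform input distribution (roughly <math>P(X=1)\approx 0.4</math>, <math>C \approx 0.32</math> bits), which is why the definition takes a supremum over <math>p_X</math> rather than fixing any particular input distribution.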
Suppose a code consisting of <math>2^{nR}</math> codewords. Let <math>W</math> be an index drawn uniformly over this set. Let <math>X^n</math> and <math>Y^n</math> denote the transmitted codeword and the received sequence, respectively.
# <math>nR = H(W) = H(W|Y^n) + I(W;Y^n)</math> using identities involving entropy and mutual information
# <math>\le H(W|Y^n) + I(X^n(W);Y^n)</math> since <math>X^n(W)</math> is a function of <math>W</math>, by the [[data processing inequality]]
# <math>\le 1 + P_e^{(n)}nR + I(X^n(W);Y^n)</math> by [[Fano's inequality]] (the argument is completed below)
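The argument can be completed in the standard way, sketched here: for a discrete memoryless channel used <math>n</math> times without feedback, <math>I(X^n(W);Y^n) \le nC</math>, so the chain of inequalities gives
:: <math>nR \le 1 + P_e^{(n)}nR + nC.</math>
Dividing by <math>nR</math> and rearranging yields
:: <math>P_e^{(n)} \ge 1 - \frac{C}{R} - \frac{1}{nR},</math>
so if <math>R > C</math>, the probability of error remains bounded away from zero as <math>n \to \infty</math>.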