(MacKay (2003), p. 162; cf. Gallager (1968), ch. 5; Cover and Thomas (1991), p. 198; Shannon (1948), thm. 11)
== Outline of Proof ==
As with several other major results in information theory, the proof of the noisy channel coding theorem consists of an achievability result and a matching converse result. Together, these two components bound the set of rates at which reliable communication over a noisy channel is possible, and the fact that they match shows that these bounds are tight.
The following outlines represent only one of the many proof styles available for study in information theory texts.
====Achievability for discrete memoryless channels====
====Converse for discrete memoryless channels====
Consider a code with <math>2^{nR}</math> codewords. Let the message W be drawn uniformly over this set of indices. Let <math>X^n</math> and <math>Y^n</math> be the transmitted codeword and the received word, respectively.
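This setup can be sketched numerically. The binary symmetric channel, the rate, and the random codebook below are illustrative choices not fixed by the text:

```python
import random

n, R = 10, 0.5              # block length and rate (illustrative values)
M = int(2 ** (n * R))       # number of codewords, 2^{nR}

random.seed(0)
# Codebook: one length-n binary codeword X^n per message index
codebook = [[random.randint(0, 1) for _ in range(n)] for _ in range(M)]

W = random.randrange(M)     # message W drawn uniformly over the index set
Xn = codebook[W]            # transmitted codeword X^n(W)

p = 0.1                     # BSC crossover probability (illustrative)
# Received word Y^n: each bit of X^n is flipped with probability p
Yn = [x ^ (random.random() < p) for x in Xn]
```

A decoder would then estimate W from <math>Y^n</math> alone; the converse below bounds how well any such decoder can do.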
:1) <math>nR = H(W) = H(W|Y^n) + I(W;Y^n)\;</math> using identities relating entropy and mutual information
:2) <math>\le H(W|Y^n) + I(X^n(W);Y^n)</math> by the data-processing inequality, since <math>X^n</math> is a function of W
:3) <math>\le 1 + P_e^{(n)}nR + I(X^n(W);Y^n)</math> by [[Fano's Inequality]], which gives <math>H(W|Y^n) \le 1 + P_e^{(n)}nR</math>
:4) <math>\le 1 + P_e^{(n)}nR + nC</math> since capacity is the maximized mutual information, so <math>I(X^n;Y^n) \le nC</math> for a memoryless channel.
Rearranging these steps gives <math> P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R} </math>. As the block length n goes to infinity, <math> P_e^{(n)}</math> is bounded away from 0 if R is greater than C; arbitrarily low probability of error is achievable only if R is less than C.
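The converse bound can be checked numerically. The binary symmetric channel, with capacity <math>C = 1 - H_2(p)</math>, serves here as an illustrative example not specified in the text:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.1                      # BSC crossover probability (illustrative)
C = 1 - h2(p)                # capacity of the BSC, about 0.531 bits per use
R = 0.8                      # attempted rate above capacity

for n in [10, 100, 1000, 10_000]:
    # Converse bound on the error probability: P_e >= 1 - 1/(nR) - C/R
    bound = 1 - 1 / (n * R) - C / R
    print(n, round(bound, 4))
# As n grows, the bound approaches 1 - C/R (about 0.336 here, positive),
# so the error probability stays bounded away from zero whenever R > C.
```

Picking R below C instead makes the limiting value 1 − C/R negative, so the bound becomes vacuous, consistent with reliable communication being possible there.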
==References==
* [[Shannon-Hartley theorem]]
* [[Turbo code]]
* [[Fano's Inequality]]
==External links==