Finally, given that the average codebook is shown to be "good," we know that there exists a codebook whose performance is at least as good as the average, and which therefore achieves arbitrarily low error probability when communicating across the noisy channel.
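This random-coding existence argument can be illustrated numerically. The following is a minimal sketch (an illustrative simulation, not part of the proof); the channel, rate, and decoder are arbitrary choices: a binary symmetric channel with crossover probability 0.05, a rate well below its capacity of about 0.71, and minimum-distance decoding over a randomly drawn codebook.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10        # block length (tiny, for illustration)
R = 0.3       # rate, well below the BSC(0.05) capacity of about 0.71
p = 0.05      # crossover probability of the binary symmetric channel
M = 2 ** int(n * R)   # number of codewords, 2^{nR} = 8

# Random codebook: each codeword drawn i.i.d. uniform over {0,1}^n
codebook = rng.integers(0, 2, size=(M, n))

errors = 0
trials = 2000
for _ in range(trials):
    w = rng.integers(M)           # message index W, uniform over the codebook
    noise = rng.random(n) < p     # BSC flips each bit independently w.p. p
    y = codebook[w] ^ noise       # received word Y^n
    # Minimum Hamming distance decoding
    w_hat = np.argmin((codebook ^ y).sum(axis=1))
    errors += (w_hat != w)

print(f"empirical error rate: {errors / trials:.3f}")
```

Even at this tiny block length, the empirical error rate averaged over the random codebook is modest; increasing <math>n</math> while keeping <math>R < C</math> drives it toward zero, which is the content of the achievability argument.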
==== Weak converse for discrete memoryless channels ====
Suppose a code of <math>2^{nR}</math> codewords. Let ''W'' be an index drawn uniformly over this set. Let <math>X^n</math> and <math>Y^n</math> be the transmitted codeword and the received sequence, respectively.
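The omitted intermediate steps follow the standard argument; a sketch of the chain of inequalities (a reconstruction of the usual derivation, combining Fano's inequality with the memoryless-channel bound <math>I(W;Y^n) \le nC</math>):

:<math>
\begin{align}
nR = H(W) &= H(W\mid Y^n) + I(W;Y^n) \\
&\le 1 + P_e^{(n)} nR + I(W;Y^n) \\
&\le 1 + P_e^{(n)} nR + nC,
\end{align}
</math>

where the first inequality is Fano's inequality. Dividing through by <math>nR</math> and rearranging yields the bound on <math>P_e^{(n)}</math>.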
The result of these steps is that <math> P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R} </math>. As the block length <math>n</math> goes to infinity, <math>P_e^{(n)}</math> is bounded away from 0 if <math>R</math> is greater than <math>C</math>; an arbitrarily low probability of error is achievable only if <math>R</math> is less than <math>C</math>.
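The bound can be checked numerically. The sketch below is illustrative only; the channel and rate are arbitrary choices: a binary symmetric channel with crossover probability 0.1, whose capacity is <math>C = 1 - H_b(0.1) \approx 0.531</math>, and an attempted rate <math>R = 0.8 > C</math>.

```python
import math

def binary_entropy(p):
    """H_b(p) = -p log2(p) - (1-p) log2(1-p)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

C = 1.0 - binary_entropy(0.1)   # BSC(0.1) capacity, about 0.531 bits per use
R = 0.8                         # attempted rate, above capacity

def weak_converse_bound(n):
    """Lower bound P_e^{(n)} >= 1 - 1/(nR) - C/R from the weak converse."""
    return 1.0 - 1.0 / (n * R) - C / R

for n in (10, 100, 1000):
    print(n, round(weak_converse_bound(n), 4))
```

As <math>n</math> grows the bound rises toward <math>1 - C/R \approx 0.336</math>: the error probability stays bounded away from zero, though this argument alone does not force it all the way to 1.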
==== Strong converse for discrete memoryless channels ====
A strong converse theorem, proven by Wolfowitz in 1957,<ref>Robert Gallager. ''Information Theory and Reliable Communication.'' New York: John Wiley and Sons, 1968. ISBN 0-471-29048-3</ref> states that,
:<math>
P_e \geq 1- \frac{4A}{n(R-C)^2} - e^{-n(R-C)}
</math>
for some finite positive constant <math>A</math>. While the weak converse only states that the error probability is bounded away from zero as <math>n</math> goes to infinity, the strong converse states that the error probability goes to 1. Thus, <math>C</math> is a sharp threshold between perfectly reliable and completely unreliable communication.
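The behavior of this bound can be made concrete with a small sketch. The parameters are illustrative assumptions: the theorem's finite constant <math>A</math> is unspecified, so it is set to 1 here, and a rate excess <math>R - C = 0.1</math> is assumed.

```python
import math

A = 1.0           # the theorem's finite constant, set to 1 for illustration
R_minus_C = 0.1   # assumed rate excess R - C above capacity

def strong_converse_bound(n):
    """Lower bound P_e >= 1 - 4A/(n (R-C)^2) - exp(-n (R-C))."""
    return 1.0 - 4.0 * A / (n * R_minus_C ** 2) - math.exp(-n * R_minus_C)

for n in (1_000, 10_000, 100_000):
    print(n, round(strong_converse_bound(n), 4))
```

As <math>n</math> grows, the bound approaches 1: above capacity, essentially every transmitted block is decoded incorrectly.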
== Channel coding theorem for non-stationary memoryless channels ==