The result of these steps is that <math> P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R} </math>. As the block length n goes to infinity, <math> P_e^{(n)}</math> is bounded away from 0 whenever R is greater than C: an arbitrarily small probability of error is achievable only if R is less than C.
== Channel coding theorem for non-stationary memoryless channels ==
We assume that the channel is memoryless, but that its transition probabilities change with time, in a fashion known to both the transmitter and the receiver.
Then the channel capacity is given by
<math>
C=\liminf_{n\to\infty}\;\max_{p^*(X_1),p^*(X_2),\ldots}\frac{1}{n}\sum_{i=1}^n I(X_i;Y_i)
</math>
where <math>p^*(X_i)</math> is the capacity-achieving distribution for the ''i''th channel. That is,
<math>
C=\liminf_{n\to\infty}\;\frac{1}{n}\sum_{i=1}^n C_i
</math>
where <math>C_i</math> is the capacity of the ''i''th channel.
The proof runs through in almost the same way as that of the channel coding theorem. Achievability follows from a random coding argument in which each symbol is chosen randomly from the capacity-achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources given in [[Asymptotic Equipartition Property]].
The <math>\liminf</math> is needed because the ordinary limit may not exist: the sequence of averages <math>\tfrac{1}{n}\sum_{i=1}^n C_i</math> need not converge.
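As a concrete numerical illustration (not part of the theorem itself), the average <math>\tfrac{1}{n}\sum_{i=1}^n C_i</math> can be computed for a hypothetical non-stationary channel: a binary symmetric channel whose crossover probability alternates between two values known to transmitter and receiver. Here the average converges, so the lim inf equals the ordinary limit, namely the mean of the two per-channel capacities.

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per use) of a binary symmetric channel
    with crossover probability p: C = 1 - H(p)."""
    return 1.0 - h2(p)

# Hypothetical time-varying channel: the crossover probability
# alternates between 0.1 and 0.2 on successive channel uses.
n = 1000
crossovers = [0.1 if i % 2 == 0 else 0.2 for i in range(n)]
caps = [bsc_capacity(p) for p in crossovers]

# Average capacity (1/n) * sum_{i=1}^n C_i.  For this periodic
# sequence the averages converge, so liminf = limit = the mean
# of the two capacities.
avg = sum(caps) / n
expected = (bsc_capacity(0.1) + bsc_capacity(0.2)) / 2
print(avg, expected)
```

If the sequence of crossover probabilities were chosen so that the running averages oscillate without converging (e.g. ever-longer blocks of each value), the lim inf would pick out the smallest cluster point of the averages, which is exactly the rate guaranteed by the theorem.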
==References==