Noisy-channel coding theorem: Difference between revisions

'''Achievability for discrete memoryless channels'''
:::<math>2^{-n(H(Y) + \epsilon)} \le p(Y_1^n) \le 2^{-n(H(Y)-\epsilon)}</math>
 
:::<math>2^{-n(H(X,Y) + \epsilon)} \le p(X_1^n, Y_1^n) \le 2^{-n(H(X,Y) - \epsilon)}</math>
 
We say that two sequences <math>X_1^n</math> and <math>Y_1^n</math> are ''jointly typical'' if they lie in the jointly typical set defined above.
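The joint-typicality conditions above can be checked numerically. The sketch below does so for a binary symmetric channel with crossover probability 0.1 and a uniform input; the joint pmf <code>p_xy</code> and the helper <code>is_jointly_typical</code> are illustrative names, not part of the article.

```python
import numpy as np

def is_jointly_typical(x, y, p_xy, eps):
    """Check the three AEP conditions defining the jointly typical set."""
    n = len(x)
    p_x = p_xy.sum(axis=1)          # marginal distribution of X
    p_y = p_xy.sum(axis=0)          # marginal distribution of Y
    H_x = -(p_x * np.log2(p_x)).sum()
    H_y = -(p_y * np.log2(p_y)).sum()
    H_xy = -(p_xy * np.log2(p_xy)).sum()
    # empirical per-symbol log-likelihoods (base 2)
    lx = -sum(np.log2(p_x[a]) for a in x) / n
    ly = -sum(np.log2(p_y[b]) for b in y) / n
    lxy = -sum(np.log2(p_xy[a, b]) for a, b in zip(x, y)) / n
    return (abs(lx - H_x) < eps and
            abs(ly - H_y) < eps and
            abs(lxy - H_xy) < eps)

# Joint pmf of a BSC(0.1) with uniform input: P(x, y) = 0.5 * P(y | x)
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

x = [0] * 10 + [1] * 10
y = list(x); y[0], y[10] = 1, 0   # y disagrees with x in 2 of 20 places (10%)

print(is_jointly_typical(x, y, p_xy, 0.01))        # True: disagreement matches the channel
print(is_jointly_typical(x, list(x), p_xy, 0.05))  # False: a noiseless y is jointly atypical
```

Note that the pair whose disagreement rate matches the channel's crossover probability is jointly typical, while an exact copy of <math>x</math> is not: typicality is about matching the statistics of the joint distribution, not about similarity.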
 
'''Steps'''
*From the joint AEP, we know that the probability that the transmitted codeword <math>X_1^n(1)</math> and the received <math>Y_1^n</math> are not jointly typical goes to 0 as n grows large. We can bound this error probability by <math>\epsilon</math>.
 
*Also from the joint AEP, we know that the probability that a particular <math>X_1^n(i)</math> with <math>i \ne 1</math> (which is independent of the transmitted codeword) and the <math>Y_1^n</math> resulting from W = 1 are jointly typical is <math>\le 2^{-n(I(X;Y) - 3\epsilon)}</math>.
 
Define: <math>E_i = \{(X_1^n(i), Y_1^n) \in A_\epsilon^{(n)}\},\quad i = 1, 2, \ldots, 2^{nR}</math>
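The bullets above combine via the union bound over the events <math>E_i</math> with <math>i \ne 1</math>: there are fewer than <math>2^{nR}</math> such events, each of probability at most <math>2^{-n(I(X;Y) - 3\epsilon)}</math>, so their total probability is at most roughly <math>2^{-n(I(X;Y) - R - 3\epsilon)}</math>, which vanishes whenever <math>R < I(X;Y) - 3\epsilon</math>. A numerical sketch, with illustrative values of I, R, and eps not taken from the article:

```python
def union_bound(n, R, I, eps):
    """Union bound on P(some wrong codeword is jointly typical with Y_1^n):
    (2^{nR} - 1) events, each of probability at most 2^{-n(I - 3 eps)}."""
    return (2.0 ** (n * R) - 1) * 2.0 ** (-n * (I - 3 * eps))

I_xy, R, eps = 0.5, 0.3, 0.01     # rate safely below I - 3*eps = 0.47
for n in (10, 100, 1000):
    print(n, union_bound(n, R, I_xy, eps))
# With these values the bound shrinks roughly like 2^{-0.17 n}.
```

Running this shows the bound dropping by many orders of magnitude as n grows, which is exactly the achievability claim: at any rate below the mutual information, the probability of confusing the transmitted codeword with another one can be made arbitrarily small by taking n large enough.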