{{short description|Limit on data transfer rate}}
 
{{redirect|Shannon's theorem|text=Shannon's name is also associated with the [[sampling theorem]]}}
 
In [[information theory]], the '''noisy-channel coding theorem''' (sometimes '''Shannon's theorem''' or '''Shannon's limit'''), establishes that for any given degree of noise contamination of a communication channel, it is possible (in theory) to communicate discrete data (digital [[information]]) nearly error-free up to a computable maximum rate through the channel. This result was presented by [[Claude Shannon]] in 1948 and was based in part on earlier work and ideas of [[Harry Nyquist]] and [[Ralph Hartley]].
 
The '''Shannon limit''' or '''Shannon capacity''' of a communication channel refers to the maximum [[Code rate|rate]] of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and [[Warren Weaver]] entitled ''[[The Mathematical Theory of Communication]]'' (1949). This founded the modern discipline of [[information theory]].
== Overview ==
 
Stated by [[Claude Shannon]] in 1948, the theorem describes the maximum possible efficiency of [[error-correcting code|error-correcting methods]] versus levels of noise interference and data corruption. The theorem does not describe ''how to construct'' the error-correcting method; it only tells us how good the ''best possible'' method can be. Shannon's theorem has wide-ranging applications in both communications and [[data storage device|data storage]]. It is of foundational importance to the modern field of [[information theory]]. Shannon only gave an outline of the proof; the first rigorous proof for the discrete case was given in {{harv|Feinstein|1954}}.
 
Shannon's theorem states that given a noisy channel with [[channel capacity]] ''C'' and information transmitted at a rate ''R'', then if <math>R < C</math> there exist [[code]]s that allow the [[probability of error]] at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below the limiting rate ''C''.
 
The converse is also important. If <math>R > C</math>, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
 
The channel capacity <math>C</math> can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using the [[Shannon–Hartley theorem]].
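For a band-limited AWGN channel, the Shannon–Hartley formula <math>C = B \log_2(1 + S/N)</math> can be evaluated directly. The following is a minimal sketch (the bandwidth and signal-to-noise figures are illustrative values, not taken from the article):

```python
import math

def shannon_hartley_capacity(bandwidth_hz, snr_linear):
    """Capacity in bit/s of a band-limited AWGN channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3 kHz channel with a 30 dB signal-to-noise ratio (S/N = 1000).
print(round(shannon_hartley_capacity(3000, 1000)))  # → 29902 bit/s
```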
 
Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such as [[Reed–Solomon code]]s and, more recently, [[low-density parity-check code|low-density parity-check]] (LDPC) codes and [[turbo code]]s, come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today's [[digital signal processors]], it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045&nbsp;dB of the Shannon limit (for binary [[additive white Gaussian noise]] (AWGN) channels, with very long block lengths).<ref>{{cite journal |author2-link=G. David Forney, Jr. |author4-link=Rüdiger Urbanke |author1=Sae-Young Chung |first2=G. D. |last2=Forney |first3=T.J. |last3=Richardson |first4=R. |last4=Urbanke |title=On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit |journal=IEEE Communications Letters |volume=5 |issue=2 |pages=58–60 |date=February 2001 |doi=10.1109/4234.905935 |s2cid=7381972 |url=http://www.josephboutros.org/ldpc_vs_turbo/ldpc_Chung_CLfeb01.pdf}}</ref>
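The inefficiency of the repetition scheme can be seen in a short simulation sketch (the crossover probability 0.1 is an illustrative value): majority voting lowers the bit-error rate from <math>p</math> to about <math>3p^2 - 2p^3</math>, but the rate falls to 1/3 and the residual error does not vanish.

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def repeat3_decode(received):
    """Majority-vote decoding for the rate-1/3 repetition code."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

rng = random.Random(0)
p = 0.1                                     # illustrative crossover probability
msg = [rng.randint(0, 1) for _ in range(10000)]
coded = [b for b in msg for _ in range(3)]  # send each bit three times
decoded = repeat3_decode(bsc(coded, p, rng))
errors = sum(m != d for m, d in zip(msg, decoded))
# Residual bit-error rate is about 3p^2 - 2p^3 = 0.028: better than p = 0.1,
# but it does not go to zero, and the rate has dropped to 1/3.
print(errors / len(msg))
```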
 
== Mathematical statement ==
 
[[Image:Noisy-channel coding theorem — channel capacity graph.png|thumb|right|300px|Graph showing the proportion of a channel’s capacity (''y''-axis) that can be used for payload based on how noisy the channel is (probability of bit flips; ''x''-axis)]]
 
The basic mathematical model for a communication system is the following:
 
: <math title="Channel model">\xrightarrow[\text{Message}]{W}
\begin{array}{ |c| }\hline \text{Encoder} \\ f_n \\ \hline\end{array} \xrightarrow[\mathrm{Encoded \atop sequence}]{X^n} \begin{array}{ |c| }\hline \text{Channel} \\ p(y|x) \\ \hline\end{array} \xrightarrow[\mathrm{Received \atop sequence}]{Y^n} \begin{array}{ |c| }\hline \text{Decoder} \\ g_n \\ \hline\end{array} \xrightarrow[\mathrm{Estimated \atop message}]{\hat W}</math>
A '''message''' ''W'' is transmitted through a noisy channel by using encoding and decoding functions. An '''encoder''' maps ''W'' into a pre-defined sequence of channel symbols of length ''n''. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel – the received sequence – is fed into a '''decoder''' which maps the sequence into an estimate of the message. In this setting, the probability of error is defined as:

:: <math> P_e = \text{Pr}\left\{ \hat{W} \neq W \right\}. </math>

'''Theorem''' (Shannon, 1948):

: 1. For every discrete memoryless channel, the [[channel capacity]], defined in terms of the [[mutual information]] <math>I(X; Y)</math> as

:: <math>\ C = \sup_{p_X} I(X;Y)</math><ref>For a description of the "sup" function, see [[Supremum]].</ref>

: has the following property. For any <math>\epsilon>0</math> and <math>R<C</math>, for large enough <math>N</math>, there exists a code of length <math>N</math> and rate <math>\geq R</math> and a decoding algorithm, such that the maximal probability of block error is <math>\leq \epsilon</math>.

: 2. If a probability of bit error <math>p_b</math> is acceptable, rates up to <math>R(p_b)</math> are achievable, where

:: <math>R(p_b) = \frac{C}{1-H_2(p_b)},</math>

: and <math> H_2(p_b)</math> is the ''[[binary entropy function]]''

:: <math>H_2(p_b)=- \left[p_b \log_2 {p_b} + (1-p_b) \log_2 ({1-p_b}) \right].</math>

: 3. For any <math>p_b</math>, rates greater than <math>R(p_b)</math> are not achievable.

(MacKay (2003), p.&nbsp;162; cf Gallager (1968), ch.5; Cover and Thomas (1991), p.&nbsp;198; Shannon (1948) thm. 11)
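The quantities in the theorem can be evaluated numerically for a binary symmetric channel, whose capacity is <math>1 - H_2(p)</math>. In the sketch below the crossover probability 0.1 and the tolerated bit-error rate 0.01 are illustrative values, not from the article:

```python
import math

def h2(p):
    """Binary entropy function H_2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def achievable_rate(capacity, p_b):
    """R(p_b) = C / (1 - H_2(p_b)): highest rate when bit-error rate p_b is tolerated."""
    return capacity / (1 - h2(p_b))

c = 1 - h2(0.1)                 # capacity of a BSC with crossover probability 0.1
print(round(c, 4))              # → 0.531
print(round(achievable_rate(c, 0.01), 4))  # tolerating p_b = 0.01 allows a higher rate
```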
 
== Outline of proof ==
 
As with several other major results in information theory, the proof of the noisy channel coding theorem includes an achievability result and a matching converse result. These two components serve to bound the set of possible rates at which one can communicate over a noisy channel, and the matching serves to show that these bounds are tight.
 
The following outlines are only one set of many different styles available for study in information theory texts.
 
=== Achievability for discrete memoryless channels ===
 
This particular proof of achievability follows the style of proofs that make use of the [[asymptotic equipartition property]] (AEP). Another style can be found in information theory texts using [[error exponent]]s.
 
Both types of proofs make use of a random coding argument where the codebook used across a channel is randomly constructed; this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below the [[channel capacity]].
 
By an AEP-related argument, given a channel, length <math>n</math> strings of source symbols <math>X_1^{n}</math>, and length <math>n</math> strings of channel outputs <math>Y_1^{n}</math>, we can define a ''jointly typical set'' by the following:
 
: <math>A_\varepsilon^{(n)} = \{(x^n, y^n) \in \mathcal X^n \times \mathcal Y^n </math>

::: <math>2^{-n(H(X)+\varepsilon)} \le p(X_1^n) \le 2^{-n(H(X) - \varepsilon)}</math>

::: <math>2^{-n(H(Y) + \varepsilon)} \le p(Y_1^n) \le 2^{-n(H(Y)-\varepsilon)}</math>

::: <math>{2^{-n(H(X,Y) + \varepsilon)}}\le p(X_1^n, Y_1^n) \le 2^{-n(H(X,Y) -\varepsilon)} \}</math>
 
We say that two sequences <math>{X_1^n}</math> and <math>Y_1^n</math> are ''jointly typical'' if they lie in the jointly typical set defined above.
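The joint AEP can be checked empirically. The sketch below uses an assumed setup (uniform input bits through a binary symmetric channel with illustrative crossover probability 0.1, so <math>H(X)=H(Y)=1</math> and <math>H(X,Y)=1+H_2(0.1)</math>) and estimates the probability that a drawn pair lies in the jointly typical set; by the joint AEP this approaches 1 as <math>n</math> grows:

```python
import math
import random

def h2(p):
    """Binary entropy function in bits."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def jointly_typical_fraction(n, eps, trials, p=0.1, seed=0):
    """Estimate Pr[(X^n, Y^n) in A_eps^(n)] for uniform X^n sent over a BSC(p).

    Since X^n and Y^n are each i.i.d. uniform, the two marginal typicality
    conditions hold exactly; only the joint condition on p(x^n, y^n) can fail.
    """
    rng = random.Random(seed)
    h_xy = 1 + h2(p)  # H(X,Y) = H(X) + H(Y|X) = 1 + H_2(p)
    hits = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n))
        # Empirical -(1/n) log2 p(x^n, y^n) for this draw.
        per_symbol = 1 - (flips * math.log2(p) + (n - flips) * math.log2(1 - p)) / n
        hits += abs(per_symbol - h_xy) <= eps
    return hits / trials

print(jointly_typical_fraction(n=100, eps=0.1, trials=2000))
print(jointly_typical_fraction(n=2000, eps=0.1, trials=2000))  # nearer to 1
```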
 
'''Steps'''
# In the style of the random coding argument, we randomly generate <math> 2^{nR} </math> codewords of length n from a probability distribution Q.
# This code is revealed to the sender and receiver. It is also assumed that one knows the transition matrix <math>p(y|x)</math> for the channel being used.
# A message W is chosen according to the uniform distribution on the set of codewords. That is, <math>Pr(W = w) = 2^{-nR}, w = 1, 2, \dots, 2^{nR}</math>.
# The message W is sent across the channel.
# The receiver receives a sequence according to <math>P(y^n|x^n(w))= \prod_{i = 1}^np(y_i|x_i(w))</math>
# Sending these codewords across the channel, we receive <math>Y_1^n</math>, and decode to some source sequence if there exists exactly 1 codeword that is jointly typical with Y. If there are no jointly typical codewords, or if there are more than one, an error is declared. An error also occurs if a decoded codeword does not match the original codeword. This is called ''typical set decoding''.
 
The probability of error of this scheme is divided into two parts:
 
# First, error can occur if no jointly typical X sequences are found for a received Y sequence
# Second, error can occur if an incorrect X sequence is jointly typical with a received Y sequence.
 
* By the randomness of the code construction, we can assume that the average probability of error averaged over all codes does not depend on the index sent. Thus, without loss of generality, we can assume ''W'' = 1.
* From the joint AEP, we know that the probability that no jointly typical X exists goes to 0 as n grows large. We can bound this error probability by <math>\varepsilon</math>.
* Also from the joint AEP, we know the probability that a particular <math>X_1^{n}(i)</math> and the <math>Y_1^n</math> resulting from ''W'' = 1 are jointly typical is <math>\le 2^{-n(I(X;Y) - 3\varepsilon)}</math>.
 
Define: <math>E_i = \{(X_1^n(i), Y_1^n) \in A_\varepsilon^{(n)}\}, i = 1, 2, \dots, 2^{nR}</math>
 
as the event that message i is jointly typical with the sequence received when message 1 is sent.
 
: <math>
\begin{align}
P(\text{error}) & {} = P(\text{error}|W=1) \le P(E_1^c) + \sum_{i=2}^{2^{nR}}P(E_i) \\
& {} \le P(E_1^c) + (2^{nR}-1)2^{-n(I(X;Y)-3\varepsilon)} \\
& {} \le \varepsilon + 2^{-n(I(X;Y)-R-3\varepsilon)}.
\end{align}
</math>
 
We can observe that as <math>n</math> goes to infinity, if <math>R < I(X;Y)</math> for the channel, the probability of error will go to 0.
 
Finally, given that the average codebook is shown to be "good," we know that there exists a codebook whose performance is better than the average, and so satisfies our need for arbitrarily low error probability communicating across the noisy channel.
 
=== Weak converse for discrete memoryless channels ===
 
Suppose a code of <math>2^{nR}</math> codewords. Let ''W'' be drawn uniformly over this set as an index. Let <math>X^n</math> and <math>Y^n</math> be the transmitted codewords and received sequences, respectively.
 
# <math>nR = H(W) = H(W|Y^n) + I(W;Y^n)\;</math> using identities involving entropy and mutual information
# <math>\le H(W|Y^n) + I(X^n(W);Y^{n})</math> since X is a function of W
# <math>\le 1 + P_e^{(n)}nR + I(X^n(W);Y^n)</math> by the use of [[Fano's inequality]]
# <math>\le 1 + P_e^{(n)}nR + nC</math> by the fact that capacity is the maximized mutual information.
 
The result of these steps is that <math> P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R} </math>. As the block length <math>n</math> goes to infinity, <math>P_e^{(n)}</math> is bounded away from 0 if ''R'' is greater than ''C''; we can get arbitrarily low probabilities of error only if ''R'' is less than ''C''.
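The bound can be evaluated for illustrative numbers (a hypothetical channel with <math>C = 0.5</math> bits per use and attempted rate <math>R = 0.6</math>); as <math>n</math> grows, it approaches <math>1 - C/R</math>:

```python
def weak_converse_lower_bound(n, rate, capacity):
    """Fano-based lower bound on block error probability: P_e >= 1 - 1/(nR) - C/R."""
    return 1 - 1 / (n * rate) - capacity / rate

# Hypothetical numbers: capacity C = 0.5 bits per use, attempted rate R = 0.6.
for n in (10, 100, 1000, 10000):
    print(n, round(weak_converse_lower_bound(n, 0.6, 0.5), 4))
# The bound tends to 1 - C/R = 1/6: the error probability stays bounded away from 0.
```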
 
=== Strong converse for discrete memoryless channels ===
 
A strong converse theorem, proven by Wolfowitz in 1957,<ref>{{cite book |first=Robert |last=Gallager |title=Information Theory and Reliable Communication |publisher=Wiley |date=1968 |isbn=0-471-29048-3 }}</ref> states that,
 
: <math>
P_e \geq 1- \frac{4A}{n(R-C)^2} - e^{-\frac{n(R-C)}{2}}
</math>
 
for some finite positive constant <math>A</math>. While the weak converse states that the error probability is bounded away from zero as <math>n</math> goes to infinity, the strong converse states that the error goes to 1. Thus, <math>C</math> is a sharp threshold between perfectly reliable and completely unreliable communication.
 
== Channel coding theorem for non-stationary memoryless channels ==
 
We assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver.
 
Then the channel capacity is given by
 
: <math>
C=\lim \inf \max_{p^{(X_1)},p^{(X_2)},\ldots}\frac{1}{n}\sum_{i=1}^nI(X_i;Y_i).
</math>
 
The maximum is attained at the capacity achieving distributions for each respective channel. That is,
<math>
C=\lim\; \inf\;\; \frac{1}{n}\sum_{i=1}^n C_i
</math>
where <math>C_i</math> is the capacity of the ''i''th channel.
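As a sketch, for a hypothetical time-varying binary symmetric channel whose crossover probability alternates between two illustrative values, this average of per-channel capacities can be computed directly:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1 - h2(p)

# Hypothetical schedule: the crossover probability alternates between 0.05 and 0.2.
crossovers = [0.05, 0.2] * 500            # n = 1000 channel uses
avg = sum(bsc_capacity(p) for p in crossovers) / len(crossovers)
print(round(avg, 4))  # → 0.4958, i.e. (C(0.05) + C(0.2)) / 2
```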
 
=== Outline of the proof ===
 
The proof runs through in almost the same way as that of channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in the [[asymptotic equipartition property]] article.
 
The technicality of [[lim inf]] comes into play when <math>\frac{1}{n}\sum_{i=1}^n C_i</math> does not converge.
 
 
== See also ==
* [[Error exponent]]
* [[Asymptotic equipartition property]] (AEP)
* [[Fano's inequality]]
* [[Rate–distortion theory]]
* [[Shannon's source coding theorem]]
* [[Shannon–Hartley theorem]]
* [[Turbo code]]
* [[Super dense coding]]
 
== Notes ==
{{reflist}}
== References ==
*{{cite web |first=B. |last=Aazhang |title=Shannon's Noisy Channel Coding Theorem |date=2004 |work=Connections |publisher= |url=https://www.cse.iitd.ac.in/~vinay/courses/CSL858/reading/m10180.pdf}}
*{{cite book |author1-link=Thomas M. Cover |last1=Cover |first1=T.M. |last2=Thomas |first2=J.A. |title=Elements of Information Theory |publisher=Wiley |date=1991 |isbn=0-471-06259-6 }}
*{{cite book |author1-link=Robert Fano |first=R.M. |last=Fano |title=Transmission of information; a statistical theory of communications |publisher=MIT Press |date=1961 |isbn=0-262-06001-9 }}
*{{cite journal |last1=Feinstein |first1=Amiel |title=A new basic theorem of information theory |journal=Transactions of the IRE Professional Group on Information Theory |date=September 1954 |volume=4 |issue=4 |pages=2–22 |doi=10.1109/TIT.1954.1057459 |hdl=1721.1/4798 |bibcode=1955PhDT........12F|hdl-access=free }}
*{{cite journal |first=Lars |last=Lundheim |title=On Shannon and Shannon's Formula |journal=Telektronik |volume=98 |issue=1 |pages=20–29 |date=2002 |doi= |url=http://www.cs.miami.edu/home/burt/learning/Csc524.142/LarsTelektronikk02.pdf}}
 
*{{cite book |author1-link=David J.C. MacKay |first=David J.C. |last=MacKay |title=Information Theory, Inference, and Learning Algorithms |publisher=Cambridge University Press |date=2003 |isbn=0-521-64298-1 |pages= |url=http://www.inference.phy.cam.ac.uk/mackay/itila/book.html}} [free online]
*{{cite journal|author-link=Claude E. Shannon | doi=10.1002/j.1538-7305.1948.tb01338.x | title=A Mathematical Theory of Communication | year=1948 | last1=Shannon | first1=C. E. | journal=Bell System Technical Journal | volume=27 | issue=3 | pages=379–423 }}
*{{cite book |author-link=Claude E. Shannon |first=C.E. |last=Shannon |title=A Mathematical Theory of Communication |publisher=University of Illinois Press |orig-year=1948 |date=1998 |pages= |url=http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html}}
*{{cite journal |first=J. |last=Wolfowitz |title=The coding of messages subject to chance errors |journal=Illinois J. Math. |volume=1 |issue= 4|pages=591–606 |date=1957 |doi= 10.1215/ijm/1255380682|url=https://projecteuclid.org/download/pdf_1/euclid.ijm/1255380682|doi-access=free }}
 
{{DEFAULTSORT:Noisy-Channel Coding Theorem}}
 
[[Category:Error-detection and correction]]
[[Category:Information theory]]
[[Category:Theorems in discrete mathematics]]
[[Category:Telecommunication theory]]
[[Category:Coding theory]]