Noisy-channel coding theorem: Difference between revisions

The '''Shannon limit''' or '''Shannon capacity''' of a communications channel is the theoretical maximum information transfer rate of the channel, for a particular noise level.
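For the common special case of a bandlimited channel with additive white Gaussian noise, the Shannon–Hartley theorem gives this limit in closed form as C = B log₂(1 + S/N). A minimal sketch of the calculation, using a hypothetical telephone-line scenario (3 kHz bandwidth, 30 dB signal-to-noise ratio) chosen purely for illustration:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit for an AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical example: 3 kHz bandwidth, 30 dB SNR.
snr = 10 ** (30 / 10)  # convert dB to a linear power ratio (here, 1000)
capacity = shannon_capacity(3000, snr)
print(capacity)  # roughly 30,000 bits per second
```

No real modem scheme is assumed here; the figure is only the theoretical ceiling that the theorem guarantees cannot be exceeded at arbitrarily low error rates.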
 
== Overview ==
 
Proved by [[Claude Shannon]] in [[1948]], the theorem describes the maximum possible efficiency of [[error-correcting code|error-correcting methods]] versus levels of noise interference and data corruption. The theory does not describe ''how to construct'' the error-correcting method; it only tells us how good the ''best possible'' method can be. Shannon's theorem has wide-ranging applications in both communications and [[data storage device|data storage]]. This theorem is of foundational importance to the modern field of [[information theory]].
 
the probability of error at the receiver cannot be made arbitrarily small, and in fact tends to one as the block length grows. No useful information can be transmitted beyond the channel capacity.
 
Simple schemes such as "send the message three times and take a best-two-out-of-three vote if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such as [[Reed-Solomon code]]s and, more recently, [[Turbo code]]s come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. With Turbo codes and the computing power in today's [[digital signal processors]], it is now possible to reach within 1/10 of one [[decibel]] of the Shannon limit.
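The inefficiency of the repetition scheme can be seen in a short simulation. The sketch below, a simplified model assuming a binary symmetric channel that flips each bit with probability p = 0.1 (a value chosen only for illustration), shows that tripling the bandwidth cost still leaves a residual error rate of about 3p²(1 − p) + p³ = 2.8%:

```python
import random

def transmit(bit: int, p: float, rng: random.Random) -> int:
    # Binary symmetric channel: flip the bit with probability p.
    return bit ^ (rng.random() < p)

def send_repeated(bit: int, p: float, rng: random.Random, n: int = 3) -> int:
    # Repetition code: send n copies, decode by majority vote.
    copies = [transmit(bit, p, rng) for _ in range(n)]
    return int(sum(copies) > n // 2)

rng = random.Random(0)
p = 0.1
trials = 100_000
errors = sum(send_repeated(0, p, rng) for _ in range(trials))
print(errors / trials)  # close to the analytic value of 0.028
```

The decoded error rate is far better than the raw 10%, but it is fixed and nonzero while the code rate has dropped to 1/3; by contrast, the theorem guarantees that below capacity the error rate can be driven arbitrarily close to zero at a fixed positive rate.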
 
== Mathematical statement ==