[[File:Hamming.jpg|thumb|A two-dimensional visualisation of the [[Hamming distance]], a critical measure in coding theory.]]
 
'''Coding theory''' is the study of the properties of [[code]]s and their respective fitness for specific applications. Codes are used for [[data compression]], [[cryptography]], [[error detection and correction]], [[data transmission]] and [[data storage]]. Codes are studied by various scientific disciplines—such as [[information theory]], [[electrical engineering]], [[mathematics]], [[linguistics]], and [[computer science]]—for the purpose of designing efficient and reliable [[data transmission]] methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
 
There are four types of coding:<ref>{{cite book
|author1=James Irvine |author2=David Harle
|url=https://books.google.com/books?id=ZigejECe4r0C |title=Data Communications and Networks
|date=2002
|page=18
|section=2.4.4 Types of Coding
|quote=There are four types of coding|isbn=9780471808725
}}</ref>
 
# [[Data compression]] (or ''source coding'')
 
==History of coding theory==
In 1948, [[Claude Shannon]] published "[[A Mathematical Theory of Communication]]", an article in two parts in the July and October issues of the ''Bell System Technical Journal''. This work focuses on the problem of how best to encode the [[information]] a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by [[Norbert Wiener]], which were in their nascent stages of being applied to communication theory at that time. Shannon developed [[information entropy]] as a measure for the uncertainty in a message while essentially inventing the field of [[information theory]].
 
The [[binary Golay code]] was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting a fourth.
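The three-error figure follows from the code's minimum distance. As a brief sketch of the standard argument (the extended binary Golay code used for 24-bit words has minimum [[Hamming distance]] <math>d = 8</math>), a code of minimum distance <math>d</math> can correct up to

:<math>t = \left\lfloor \frac{d-1}{2} \right\rfloor = \left\lfloor \frac{8-1}{2} \right\rfloor = 3</math>

errors, and can simultaneously detect a fourth because <math>3 + 4 \le d - 1</math>.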
{{main|Error detection and correction}}
 
The purpose of channel coding theory is to find codes which transmit quickly, contain many valid [[code word]]s and can correct or at least [[error detection|detect]] many errors. While not mutually exclusive, performance in these areas is a trade-off. So, different codes are optimal for different applications. The needed properties of this code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches.
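To make the trade-off concrete, the following is a minimal sketch (in Python, and not the coding scheme used on CDs) of a 3× repetition code: every data bit is transmitted three times, so the rate falls to 1/3 in exchange for the ability to correct any single flipped bit per block by majority vote.

<syntaxhighlight lang="python">
# Minimal sketch of a 3x repetition code: each data bit is sent three times,
# trading rate (1/3) for the ability to correct one flipped bit per block.

def encode(bits):
    """Repeat every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote over each block of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

if __name__ == "__main__":
    data = [1, 0, 1, 1]
    sent = encode(data)          # 12 channel bits carry 4 data bits
    sent[4] ^= 1                 # simulate a single bit error in transit
    assert decode(sent) == data  # majority vote corrects the error
</syntaxhighlight>

Practical codes, such as the Reed–Solomon codes described below, achieve far better rates for comparable protection.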
 
CDs use [[cross-interleaved Reed–Solomon coding]] to spread the data out over the disk.<ref>