{{distinguish|text=[[Huffman coding|Huffman code]]}}
{{short description|Family of linear error-correcting codes}}
{{More footnotes needed|date=March 2013}}
{{Infobox code
| name = Binary Hamming codes
}}
In [[computer science]] and [[telecommunications]], '''Hamming codes''' are a family of [[linear code|linear error-correcting codes]]. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors. By contrast, the simple [[parity bit|parity code]] cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are [[perfect code]]s, that is, they achieve the highest possible [[code rate|rate]] for codes with their [[block length]] and minimum [[Hamming distance]] of three.
In mathematical terms, Hamming codes are a class of binary linear block codes. For each integer ''m'' ≥ 2 there is a code with block length ''n'' = 2<sup>''m''</sup> − 1 and message length ''k'' = 2<sup>''m''</sup> − ''m'' − 1.
Due to the limited redundancy that Hamming codes add to the data, they can only detect and correct errors when the error rate is low. This is the case in computer memory (usually RAM), where bit errors are extremely rare and Hamming codes are widely used; a RAM with this correction system is called an [[ECC memory|ECC RAM]].
== History ==
[[Richard Hamming]], the inventor of Hamming codes, worked at [[Bell Labs]] in the late 1940s on the Bell [[Model V]] computer, an [[electromechanical]] relay-based machine with cycle times in seconds. Input was fed in on [[punched tape|punched paper tape]], seven-eighths of an inch wide, which had up to six holes per row. During weekdays, when errors in the relays were detected, the machine would stop and flash lights so that the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job.
Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to detected errors. In a taped interview, Hamming said, "And so I said, 'Damn it, if the machine can detect an error, why can't it locate the position of the error and correct it?'".<ref>{{citation|last=Thompson|first=Thomas M.|title=From Error-Correcting Codes through Sphere Packings to Simple Groups|url=https://books.google.com/books?id=ggqxuG31B3cC&q=%22From%20Error-Correcting%20Codes%20through%20Sphere%20Packings%20to%20Simple%20Groups%22&pg=PA16|pages=16–17|year=1983|series=The Carus Mathematical Monographs (#21)|publisher=Mathematical Association of America|isbn=0-88385-023-0}}</ref> Over the next few years, he worked on the problem of error-correction, developing an increasingly powerful array of algorithms. In 1950, he published what is now known as Hamming code, which remains in use today in applications such as [[ECC memory]].
{{main|Parity bit}}
Parity adds a single [[parity bit|bit]] that indicates whether the number of ones (bit-positions with values of one) in the preceding data was [[even number|even]] or [[odd number|odd]]. If an odd number of bits is changed in transmission, the message will change parity and the error can be detected at this point; however, the bit that changed may have been the parity bit itself. If an even number of bits is changed, the parity is unchanged and the error goes undetected.
Moreover, parity does not indicate which bit contained the error, even when it can detect it. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, a successful transmission could take a long time or may never occur. However, while the quality of parity checking is poor, since it uses only a single bit, this method results in the least overhead.
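The behaviour described above can be illustrated with a minimal Python sketch (even parity assumed; `parity_bit` is a hypothetical helper name, not part of any standard library):

```python
def parity_bit(data):
    # Even parity: the bit that makes the total number of 1s even.
    return sum(data) % 2

word = [1, 0, 1, 1]
sent = word + [parity_bit(word)]   # transmit data plus parity bit
received = sent.copy()
received[2] ^= 1                   # a single bit flips in transit
assert sum(received) % 2 == 1      # parity check fails: error detected
# But the check cannot say WHICH bit flipped, and two flips cancel out:
received[0] ^= 1
assert sum(received) % 2 == 0      # double error goes undetected
```

Note that the failed check only signals that ''some'' odd number of bits changed; re-transmission is the only remedy.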
{{main|Two-out-of-five code}}
A two-out-of-five code is an encoding scheme which uses five bits consisting of exactly three 0s and two 1s. This provides ten possible combinations, enough to represent the digits 0–9. This scheme can detect all single bit-errors, all odd-numbered bit-errors and some even-numbered bit-errors (for example, the flipping of both 1-bits), since any such error changes the count of ones away from exactly two.
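A validity check for this scheme reduces to counting ones, as the following sketch shows (`valid_two_out_of_five` is an illustrative name, not an established API):

```python
from itertools import product

def valid_two_out_of_five(word):
    # A 5-bit word is valid iff it holds exactly two 1s (and three 0s).
    return len(word) == 5 and sum(word) == 2

assert valid_two_out_of_five([0, 1, 0, 0, 1])
assert not valid_two_out_of_five([0, 1, 1, 0, 1])  # one flipped bit is caught
# Exactly C(5,2) = 10 valid words exist, one per decimal digit:
assert sum(valid_two_out_of_five(list(w)) for w in product([0, 1], repeat=5)) == 10
```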
====Repetition====
===General algorithm===
The following general algorithm generates a single-error correcting (SEC) code for any number of bits. The main idea is to choose the error-correcting bits such that the index-XOR (the [[exclusive or|XOR]] of all the bit positions containing a 1) is zero.
An algorithm can be deduced from the following description:
# Each data bit is included in a unique set of 2 or more parity bits, as determined by the binary form of its bit position.
## Parity bit 1 covers all bit positions which have the '''least''' significant bit set: bit 1 (the parity bit itself), 3, 5, 7, 9, etc.
## Parity bit 2 covers all bit positions which have the '''second''' least significant bit set: bits 2–3, 6–7, 10–11, 14–15, etc.
## Parity bit 4 covers all bit positions which have the '''third''' least significant bit set: bits 4–7, 12–15, 20–23, etc.
## Parity bit 8 covers all bit positions which have the '''fourth''' least significant bit set: bits 8–15, 24–31, 40–47, etc.
If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010.
The choice of the parity, even or odd, is irrelevant but the same choice must be used for both encoding and decoding.
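The worked example above can be reproduced with a short Python sketch of the general algorithm, assuming even parity (`hamming_encode` is a hypothetical helper name):

```python
def hamming_encode(data):
    """Place data bits at the non-power-of-two positions (1-indexed) and
    set each parity bit at position 2^j so that the positions it covers
    have even parity overall."""
    bits = {}
    pos = 1
    for bit in data:
        while pos & (pos - 1) == 0:   # powers of two are parity slots
            pos += 1
        bits[pos] = bit
        pos += 1
    n = pos - 1
    p = 1
    while p <= n:
        # Parity bit p covers every position whose binary form has bit p set.
        bits[p] = sum(bits.get(q, 0) for q in range(1, n + 1)
                      if q & p and q != p) % 2
        p <<= 1
    return [bits[q] for q in range(1, n + 1)]

# The byte 10011010 from the example yields the code word 011100101010:
assert hamming_encode([1, 0, 0, 1, 1, 0, 1, 0]) == [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
```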
This general rule can be shown visually:
!colspan="2"| <span style="color:#888">Bit position</span>
! <span style="color:#888">1</span> !! <span style="color:#888">2</span> !! <span style="color:#888">3</span> !! <span style="color:#888">4</span> !! <span style="color:#888">5</span> !! <span style="color:#888">6</span> !! <span style="color:#888">7</span> !! <span style="color:#888">8</span> !! <span style="color:#888">9</span> !! <span style="color:#888">10</span> !! <span style="color:#888">11</span> !! <span style="color:#888">12</span> !! <span style="color:#888">13</span> !! <span style="color:#888">14</span> !! <span style="color:#888">15</span> !! <span style="color:#888">16</span> !! <span style="color:#888">17</span> !! <span style="color:#888">18</span> !! <span style="color:#888">19</span> !! <span style="color:#888">20</span>
|rowspan="7"|
|-
!colspan="2"| Encoded data bits
|}
Shown are only 20 encoded bits (5 parity, 15 data) but the pattern continues indefinitely. The key thing about Hamming codes that can be seen from visual inspection is that any given bit is included in a unique set of parity bits.
With {{mvar|m}} parity bits, bits from 1 up to <math>2^m-1</math> can be covered. After discounting the parity bits, <math>2^m-m-1</math> bits remain for use as data. As {{mvar|m}} varies, we get all the possible Hamming codes:
| 8 || 255 || 247 ||Hamming(255,247) || 247/255 ≈ 0.969
|-
| 9 || 511 || 502 ||Hamming(511,502) ||502/511 ≈ 0.982
|-
| colspan="5" | ...
|-
|{{mvar|m}}
|<math>n=2^m-1</math>
|<math>k=2^m-m-1</math>
|Hamming<math>(2^m-1,2^m-m-1)</math>
|<math>(2^m - m - 1)/(2^m-1)</math>
|}
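The closed-form row of the table can be checked with a one-line sketch of the formulas (`hamming_params` is an illustrative name):

```python
def hamming_params(m):
    # Formulas from the table: block length, data bits, and code rate
    # for a Hamming code with m parity bits.
    n = 2 ** m - 1
    k = 2 ** m - m - 1
    return n, k, k / n

assert hamming_params(3)[:2] == (7, 4)      # Hamming(7,4)
assert hamming_params(8)[:2] == (255, 247)  # Hamming(255,247)
```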
If the decoder does not attempt to correct errors, it can reliably detect triple bit errors. If the decoder does correct errors, some triple errors will be mistaken for single errors and "corrected" to the wrong value. Error correction is therefore a trade-off between certainty (the ability to reliably detect triple bit errors) and resiliency (the ability to keep functioning in the face of single bit errors).
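Single-error correction follows directly from the index-XOR property described earlier: for a valid code word the XOR of the positions holding a 1 is zero, and a single flipped bit makes that XOR equal to the error position. A minimal decoder sketch (`hamming_decode` is a hypothetical helper name, matching the even-parity convention used in the encoding example above):

```python
def hamming_decode(code):
    """Correct at most one flipped bit: the syndrome (index-XOR of the
    positions holding a 1) is zero for a valid word, otherwise it equals
    the position of the single error."""
    syndrome = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= pos
    corrected = list(code)
    if syndrome:
        corrected[syndrome - 1] ^= 1  # flip the bit the syndrome points at
    # Data bits live at the non-power-of-two positions.
    return [b for pos, b in enumerate(corrected, 1) if pos & (pos - 1)]

codeword = [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0]  # encodes 10011010
garbled = codeword.copy()
garbled[4] ^= 1                                   # flip the bit at position 5
assert hamming_decode(garbled) == [1, 0, 0, 1, 1, 0, 1, 0]
```

A double error produces a nonzero syndrome as well, which is why this decoder would silently "correct" it to the wrong word; the extra parity bit of the extended code is what distinguishes the two cases.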
This extended Hamming code is popular in computer memory systems, where it is known as SECDED (abbreviated from ''single error correction, double error detection'').
==[7,4] Hamming code==
{{main|Hamming(7,4)}}
[[File:Hamming(7,4).svg|thumb|300px|Graphical depiction of the four data bits and three parity bits and which parity bits apply to which data bits]]
In 1950, Hamming introduced the [7,4] Hamming code. It encodes four data bits into seven bits by adding three parity bits. As explained earlier, it can either detect and correct single-bit errors or it can detect (but not correct) both single and double-bit errors.
===Construction of G and H===
For the [7,4] code, [''n'', ''k'', ''d''] = [7, 4, 3] = [2<sup>''m''</sup> − 1, 2<sup>''m''</sup> − 1 − ''m'', 3] with ''m'' = 3. The [[parity-check matrix]] '''H''' of a Hamming code is constructed by listing all columns of length ''m'' that are pairwise independent.
Thus '''H''' is a matrix whose left side consists of the nonzero ''m''-tuples of weight two or more, where the order of these columns does not matter, and whose right-hand side is the (''n'' − ''k'')-[[identity matrix]] (the ''m''-tuples of weight one).
'''G''' can then be obtained from '''H''' by taking the transpose of the left-hand side of '''H''' and placing it to the right of the ''k'' × ''k'' [[identity matrix]], so that '''G''' = ('''I'''<sub>''k''</sub> | '''A'''<sup>T</sup>) when '''H''' = ('''A''' | '''I'''<sub>''n''−''k''</sub>).
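The construction can be sketched in a few lines of Python over GF(2), assuming one particular column ordering for '''A''' (any ordering of the weight-≥2 columns would work):

```python
# Systematic [7,4] construction sketch: the columns of A are the nonzero
# 3-tuples of weight >= 2 (the weight-1 tuples form the identity block of H).
A = [[1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
H = [a_row + i_row for a_row, i_row in zip(A, I3)]   # H = (A | I3), 3x7
At = [list(col) for col in zip(*A)]                  # transpose of A
G = [i_row + a_row for i_row, a_row in zip(I4, At)]  # G = (I4 | A^T), 4x7

d = [1, 0, 1, 1]                                     # arbitrary data word
c = [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]   # c = dG
syndrome = [sum(H[r][j] * c[j] for j in range(7)) % 2 for r in range(3)]
assert syndrome == [0, 0, 0]                         # Hc^T = 0 (mod 2)
```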
\end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 & 1 & 2 & 3 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 & 1 & 0 & 1 & 0 \end{pmatrix}</math>
===[7,4] Hamming code with an additional parity bit===
[[File:Hamming(8,4).svg|thumb|300px|The same [7,4] example from above with an extra parity bit. This diagram is not meant to correspond to the matrix H for this example.]]
* [[Coding theory]]
* [[Golay code (disambiguation)|Golay code]]
* [[Hamming bound]]
* [[Hamming distance]]
* [[Low-density parity-check code]]
* [[Reed–Muller code]]
* [[Reed–Solomon error correction]]
* [[Turbo code]]
{{div col end}}
== References ==
{{refbegin}}
* {{cite journal|last1=Hamming|first1=Richard Wesley|title=Error detecting and error correcting codes|journal=[[Bell System Technical Journal]]|date=1950|volume=29|issue=2|pages=147–160|doi=10.1002/j.1538-7305.1950.tb00463.x|hdl=10945/46756 |s2cid=61141773 |url=https://calhoun.nps.edu/bitstream/10945/46756/1/Hamming_1982.pdf |archive-url=https://ghostarchive.org/archive/20221009/https://calhoun.nps.edu/bitstream/10945/46756/1/Hamming_1982.pdf |archive-date=2022-10-09 |url-status=live}}
* {{cite book
|last=Moon
|publisher=[[swissQuant Group Leadership Team]]
|date=April 2013
|url=https://www.swissquant.com/wp-content/uploads/2017/09/Mathematical-Challenge-April-2013.pdf |archive-url=https://web.archive.org/web/20170912191717/https://www.swissquant.com/wp-content/uploads/2017/09/Mathematical-Challenge-April-2013.pdf |archive-date=2017-09-12 |url-status=live
}}
* {{cite book | first1 = Dave K. | last1 = Kythe | first2 = Prem K. | last2 = Kythe | date = 28 July 2017 | title = Algebraic and Stochastic Coding Theory | chapter = Extended Hamming Codes | publisher = CRC Press | pages = 95–116 | isbn = 978-1-351-83245-8 | chapter-url = https://books.google.com/books?id=zJwuDwAAQBAJ&pg=PA95}}
{{refend}}