{{Short description|Algorithms to decode messages}}
{{use dmy dates|date=December 2020|cs1-dates=y}}
In [[coding theory]], '''decoding''' is the process of translating received messages into [[Code word (communication)|codewords]] of a given [[code]]. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a [[noisy channel]], such as a [[binary symmetric channel]].
==Notation==
<math>C \subset \mathbb{F}_2^n</math> is considered a binary code of length <math>n</math>; <math>x</math> and <math>y</math> are elements of <math>\mathbb{F}_2^n</math>; and <math>d(x,y)</math> is the [[Hamming distance]] between those elements.
==Ideal observer decoding==
Given a received vector <math>x \in \mathbb{F}_2^n</math>, '''ideal observer decoding''' picks the codeword <math>y \in C</math> that maximizes

:<math>\mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}),</math>

that is, the codeword that is most likely to have been sent given that <math>x</math> was received.
===Decoding conventions===
The decoded codeword may not be unique: more than one codeword may be equally likely to have mutated into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include:
:# Request that the codeword be resent ([[automatic repeat-request]]).
:# Choose any codeword at random from the set of most likely codewords.
:# If [[Concatenated error correction code|another code follows]], mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.
:# Report a decoding failure to the system.
==Maximum likelihood decoding==
{{Further|Maximum likelihood}}
Given a received vector <math>x \in \mathbb{F}_2^n</math>, '''maximum likelihood decoding''' picks a codeword <math>y \in C</math> that [[Optimization (mathematics)|maximizes]]
:<math>\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent})</math>,
that is, the codeword <math>y</math> that maximizes the probability that <math>x</math> was received, [[conditional probability|given that]] <math>y</math> was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding.
In fact, by [[Bayes' theorem]],

:<math>
\begin{align}
\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) & {} = \frac{\mathbb{P}(x \mbox{ received}, y \mbox{ sent})}{\mathbb{P}(y \mbox{ sent})} \\
& {} = \mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}) \cdot \frac{\mathbb{P}(x \mbox{ received})}{\mathbb{P}(y \mbox{ sent})}.
\end{align}
</math>
Here <math>\mathbb{P}(x \mbox{ received})</math> is fixed, because <math>x</math> is the word that was actually received, and <math>\mathbb{P}(y \mbox{ sent})</math> is constant, because all codewords are equally likely to be sent. Therefore, <math>\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent})</math> is maximized as a function of <math>y</math> precisely when <math>\mathbb{P}(y \mbox{ sent} \mid x \mbox{ received})</math> is maximized, and the claim follows.
As with ideal observer decoding, a convention must be agreed to for non-unique decoding.
The maximum likelihood decoding problem can also be modeled as an [[integer programming]] problem.<ref name="Feldman_2005"/>
The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying the [[generalized distributive law]].<ref name="Aji-McEliece_2000"/>
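On a [[binary symmetric channel]] the likelihood <math>\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent})</math> factorizes over positions, so a brute-force maximum likelihood decoder is easy to sketch. The following is a minimal illustration; the repetition code and the crossover probability <math>p = 0.1</math> are arbitrary choices for the example, not part of any standard:

<syntaxhighlight lang="python">
def bsc_likelihood(x, y, p):
    """P(x received | y sent) on a binary symmetric channel: each of the
    d differing positions flips with probability p, the remaining
    positions are unchanged with probability 1 - p."""
    d = sum(a != b for a, b in zip(x, y))
    return p ** d * (1 - p) ** (len(x) - d)

def ml_decode(x, code, p=0.1):
    """Pick the codeword maximizing the likelihood of receiving x."""
    return max(code, key=lambda y: bsc_likelihood(x, y, p))

# Toy code (illustrative): the repetition code of length 5.
code = [(0,) * 5, (1,) * 5]
print(ml_decode((1, 0, 1, 1, 0), code))  # (1, 1, 1, 1, 1)
</syntaxhighlight>

For <math>p < 1/2</math> the likelihood is a decreasing function of the Hamming distance, which is why this coincides with the minimum distance decoding described in the next section.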
==Minimum distance decoding==
Given a received vector <math>x \in \mathbb{F}_2^n</math>, '''minimum distance decoding''' picks a codeword <math>y \in C</math> to minimise the [[Hamming distance]]:
:<math>d(x,y) = \# \{i : x_i \not = y_i \}</math>
Minimum distance decoding is also known as ''nearest neighbour decoding''. It can be assisted or automated by using a [[standard array]]. Minimum distance decoding is a reasonable decoding method when the following conditions are met:
:#The probability <math>p</math> that an error occurs is independent of the position of the symbol.
:#Errors are independent events: an error at one position in the message does not affect other positions.
These assumptions may be reasonable for transmissions over a [[binary symmetric channel]]. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords.
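A brute-force sketch of nearest neighbour decoding follows; the even-weight code of length 3 is an arbitrary toy example. It returns the full set of nearest codewords, making explicit the ties that the decoding conventions above are meant to resolve:

<syntaxhighlight lang="python">
from itertools import product

def hamming_distance(x, y):
    """Number of coordinates in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def nearest_codewords(received, code):
    """Return ALL codewords at minimum Hamming distance from `received`;
    if more than one is returned, a tie-breaking convention applies."""
    best = min(hamming_distance(received, c) for c in code)
    return [c for c in code if hamming_distance(received, c) == best]

# Toy code (illustrative): all even-weight words of length 3.
code = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]
print(nearest_codewords((1, 1, 1), code))  # three equally near codewords
</syntaxhighlight>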
==Syndrome decoding==
<!-- [[Syndrome decoding]] redirects here -->
'''Syndrome decoding''' is a highly efficient method of decoding a [[linear code]] over a ''noisy channel'', i.e. one on which errors are made. In essence, syndrome decoding is ''minimum distance decoding'' using a reduced lookup table. This is allowed by the linearity of the code.<ref name="Beutelspacher-Rosenbaum_1998"/>
Suppose that <math>C\subset \mathbb{F}_2^n</math> is a linear code of length <math>n</math> and minimum distance <math>d</math> with [[parity-check matrix]] <math>H</math>. Then clearly <math>C</math> is capable of correcting up to

:<math>t = \left\lfloor\frac{d-1}{2}\right\rfloor</math>

errors made by the channel, since if no more than <math>t</math> errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword.

Now suppose that a codeword <math>x \in C</math> is sent over the channel and the error pattern <math>e \in \mathbb{F}_2^n</math> occurs, so that <math>z = x + e</math> is received. Ordinary minimum distance decoding would look up the vector <math>z</math> in a table of size <math>|C|</math> for the nearest match. Syndrome decoding instead exploits the defining property of the parity-check matrix, namely that <math>Hy = 0</math> for every <math>y \in C</math>. The ''syndrome'' of the received word <math>z = x + e</math> is defined to be:

:<math>Hz = H(x+e) = Hx + He = 0 + He = He</math>
To perform [[#Maximum_likelihood_decoding|ML decoding]] in a [[binary symmetric channel]], one has to look up a precomputed table of size <math>2^{n-k}</math> (where <math>k</math> is the dimension of <math>C</math>), mapping <math>He</math> to <math>e</math>.
Note that this is already of significantly less complexity than that of [[standard array]] decoding.
However, under the assumption that no more than <math>t</math> errors were made during transmission, the receiver can look up the value <math>He</math> in a further reduced table of size
:<math>
\begin{matrix}
\sum_{i=0}^t \binom{n}{i}
\end{matrix}
</math>
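A minimal sketch of the table-based procedure, using the [7,4] [[Hamming code]] (<math>d = 3</math>, so <math>t = 1</math>) as an illustrative example; the particular parity-check matrix below is one common choice, assumed here for the example:

<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

# Parity-check matrix of the [7,4] Hamming code (d = 3, so t = 1).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n, t = 7, 1

# Precompute the reduced table: syndrome -> low-weight error pattern,
# covering all patterns of weight <= t (size sum_{i<=t} C(n, i) = 8).
table = {}
for weight in range(t + 1):
    for positions in combinations(range(n), weight):
        e = np.zeros(n, dtype=int)
        e[list(positions)] = 1
        table[tuple(H @ e % 2)] = e

def syndrome_decode(z):
    """Correct up to t errors in the received vector z."""
    s = tuple(H @ z % 2)
    return (z + table[s]) % 2

x = np.array([1, 0, 1, 1, 0, 1, 0])   # a codeword: Hx = 0
z = x.copy(); z[2] ^= 1               # single-bit error in transit
print(syndrome_decode(z))             # recovers x
</syntaxhighlight>

Here the reduced table has <math>\textstyle\sum_{i=0}^{1}\binom{7}{i} = 8</math> entries, which matches <math>2^{n-k} = 2^3</math> because this code is perfect.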
== List decoding ==
{{Main|List decoding}}
== Information set decoding ==
This is a family of [[Las Vegas algorithm|Las Vegas]] probabilistic methods, all based on the observation that it is easier to guess enough error-free positions than it is to guess all the error positions.
The simplest form is due to Prange: Let <math>G</math> be the <math>k \times n</math> generator matrix of <math>C</math> used for encoding. Select <math>k</math> columns of <math>G</math> at random, and denote by <math>G'</math> the corresponding submatrix of <math>G</math>. With reasonable probability <math>G'</math> will have full rank, which means that if we let <math>c'</math> be the sub-vector for the corresponding positions of any codeword <math>c = mG</math> of <math>C</math> for a message <math>m</math>, we can recover <math>m</math> as <math>m = c' G'^{-1}</math>. Hence, if we were lucky and these <math>k</math> positions of the received word <math>y</math> contained no errors, and hence equalled the positions of the sent codeword, then we may decode.
If <math>t</math> errors occurred, the probability of such a fortunate selection of columns is given by <math>\textstyle\binom{n-t}{k}/\binom{n}{k}\approx \exp(-tk/n)</math>.
This method has been improved in various ways, e.g. by Stern<ref name="Stern_1989"/> and [[Anne Canteaut|Canteaut]] and Sendrier.<ref name="Ohta_1998"/>
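A sketch of Prange's basic method over <math>\mathbb{F}_2</math> follows, assuming the generator matrix is given as a NumPy 0/1 array; <code>gf2_inverse</code> and the <code>max_tries</code> cutoff are helpers introduced for this example, not library routines:

<syntaxhighlight lang="python">
import numpy as np

def gf2_inverse(A):
    """Gauss-Jordan inverse of a square 0/1 matrix over GF(2);
    returns None if the matrix is singular."""
    k = A.shape[0]
    M = np.concatenate([A % 2, np.eye(k, dtype=int)], axis=1)
    for col in range(k):
        pivot = next((r for r in range(col, k) if M[r, col]), None)
        if pivot is None:
            return None
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(k):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, k:]

def prange_decode(G, y, t, max_tries=1000):
    """Prange's information set decoding: guess k positions hoped to be
    error-free, recover a candidate message from them, and accept it if
    re-encoding lands within Hamming distance t of the received word y."""
    rng = np.random.default_rng()
    k, n = G.shape
    for _ in range(max_tries):
        positions = rng.choice(n, size=k, replace=False)
        G_inv = gf2_inverse(G[:, positions])   # invert the submatrix G'
        if G_inv is None:                      # G' not full rank; redraw
            continue
        m = y[positions] @ G_inv % 2           # candidate m = c' G'^{-1}
        if np.sum((m @ G % 2) != y) <= t:      # re-encode and check
            return m
    return None
</syntaxhighlight>

The expected number of redraws before an error-free information set is hit is governed by the <math>\textstyle\binom{n-t}{k}/\binom{n}{k}</math> probability above.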
== Partial response maximum likelihood ==
{{Main|PRML}}
Partial response maximum likelihood ([[PRML]]) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal.
== Viterbi decoder ==
{{Main|Viterbi decoder}}
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using [[forward error correction]] based on a convolutional code.
The [[Hamming distance]] is used as a metric for hard decision Viterbi decoders. The ''squared'' [[Euclidean distance]] is used as a metric for soft decision decoders.
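The two branch metrics can be illustrated with a short sketch; the BPSK-style mapping of a bit <math>b</math> to the ideal level <math>2b - 1</math> is an assumption made for the example:

<syntaxhighlight lang="python">
def hard_metric(received_bits, branch_bits):
    """Hard-decision branch metric: Hamming distance between the
    sliced received bits and the bits labelling a trellis branch."""
    return sum(r != b for r, b in zip(received_bits, branch_bits))

def soft_metric(received_samples, branch_bits):
    """Soft-decision branch metric: squared Euclidean distance from the
    raw samples to the ideal levels (bit 0 -> -1.0, bit 1 -> +1.0)."""
    return sum((r - (2 * b - 1)) ** 2
               for r, b in zip(received_samples, branch_bits))

# Illustrative values, not real measurements:
print(hard_metric([1, 0], [1, 1]))        # 1
print(soft_metric([0.9, -0.2], [1, 1]))   # 0.01 + 1.44 = 1.45
</syntaxhighlight>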
== Optimal decision decoding algorithm (ODDA) ==
An optimal decision decoding algorithm (ODDA) has been proposed for an asymmetric TWRC system.{{Clarify|date=January 2023}}<ref>{{Citation |title=Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system |author1=Siamack Ghadimi |journal=Universal Journal of Electrical and Electronic Engineering |date=2020}}</ref>
==See also==
* [[Don't care alarm]]
* [[Error detection and correction]]
* [[Forbidden input]]
==References==
{{reflist|refs=
<ref name="Feldman_2005">{{cite journal |title=Using Linear Programming to Decode Binary Linear Codes |first1=Jon |last1=Feldman |first2=Martin J. |last2=Wainwright |first3=David R. |last3=Karger |journal=[[IEEE Transactions on Information Theory]] |volume=51 |issue=3 |pages=954–972 |date=March 2005 |doi=10.1109/TIT.2004.842696 |s2cid=3120399 |citeseerx=10.1.1.111.6585}}</ref>
<ref name="Beutelspacher-Rosenbaum_1998">{{cite book |author-first1=Albrecht |author-last1=Beutelspacher |author-link1=Albrecht Beutelspacher |author-first2=Ute |author-last2=Rosenbaum |date=1998 |title=Projective Geometry |page=190 |publisher=[[Cambridge University Press]] |isbn=0-521-48277-1}}</ref>
<ref name="Aji-McEliece_2000">{{cite journal |last1=Aji |first1=Srinivas M. |last2=McEliece |first2=Robert J. |title=The Generalized Distributive Law |journal=[[IEEE Transactions on Information Theory]] |date=March 2000 |volume=46 |issue=2 |pages=325–343 |doi=10.1109/18.825794 |url=https://authors.library.caltech.edu/1541/1/AJIieeetit00.pdf}}</ref>
<ref name="Stern_1989">{{cite book |author-first=Jacques |author-last=Stern |date=1989 |chapter=A method for finding codewords of small weight |title=Coding Theory and Applications |series=Lecture Notes in Computer Science |publisher=[[Springer-Verlag]] |volume=388 |pages=106–113 |doi=10.1007/BFb0019850 |isbn=978-3-540-51643-9}}</ref>
<ref name="Ohta_1998">{{cite book |date=1998 |editor-last1=Ohta |editor-first1=Kazuo |editor-first2=Dingyi |editor-last2=Pei |series=Lecture Notes in Computer Science |volume=1514 |pages=187–199 |doi=10.1007/3-540-49649-1 |isbn=978-3-540-65109-3 |s2cid=37257901 |title=Advances in Cryptology — ASIACRYPT'98}}</ref>
}}

==Further reading==
* {{cite book | last=Hill | first=Raymond | title=A first course in coding theory | publisher=[[Oxford University Press]] | series=Oxford Applied Mathematics and Computing Science Series | year=1986 | isbn=0-19-853803-0 }}
* {{cite book | last=Pless | first=Vera | authorlink=Vera Pless | title=Introduction to the theory of error-correcting codes | publisher=[[John Wiley & Sons]] | series=Wiley-Interscience Series in Discrete Mathematics | year=1982 | isbn=0-471-08684-3 }}
* {{cite book | author=J.H. van Lint | title=Introduction to Coding Theory | edition=2nd | publisher=Springer-Verlag | series=[[Graduate Texts in Mathematics|GTM]] | volume=86 | year=1992 | isbn=3-540-54894-7 }}

[[Category:Coding theory]]