Decoding methods

{{Short description|Algorithms to decode messages}}
{{use dmy dates|date=December 2020|cs1-dates=y}}
In [[coding theory]], '''decoding''' is the process of translating received messages into [[Code word (communication)|codewords]] of a given [[code]]. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a [[noisy channel]], such as a [[binary symmetric channel]].
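For intuition, a binary symmetric channel with crossover probability <math>p</math> flips each transmitted bit independently with probability <math>p</math>. A minimal simulation sketch (the function name and parameters are illustrative, not from any standard library):

```python
import random

def bsc_transmit(codeword, p, rng=random.Random(0)):
    """Pass a list of bits through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    return [bit ^ (rng.random() < p) for bit in codeword]

sent = [1, 0, 1, 1, 0, 0, 1]
received = bsc_transmit(sent, p=0.1)
```

The decoding methods below all try to recover `sent` from `received` without knowing which bits were flipped.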
 
==Notation==
:# Choose any codeword at random from the set of most likely codewords.
:# If [[Concatenated error correction code|another code follows]], mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them
:# Report a decoding failure to the system
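These conventions can be made concrete. The following sketch (illustrative names and a toy code, not from the sources above) performs minimum-distance decoding and, on a tie, either reports a failure or picks one of the equally-near codewords at random:

```python
import random

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode_nearest(received, codebook, on_tie="fail"):
    """Minimum-distance decoding with an explicit convention for ties:
    either report a decoding failure (None) or choose one of the
    equally-near codewords at random."""
    best = min(hamming_distance(received, c) for c in codebook)
    nearest = [c for c in codebook if hamming_distance(received, c) == best]
    if len(nearest) == 1:
        return nearest[0]
    if on_tie == "random":
        return random.choice(nearest)
    return None  # decoding failure reported to the system

# Toy code of minimum distance 2, so some single-bit errors are ambiguous.
codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 0, 1, 1), (1, 1, 0, 0)]
```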
 
==Maximum likelihood decoding==
{{Further|Maximum likelihood}}
 
Given a received vector <math>x \in \mathbb{F}_2^n</math>, '''[[maximum likelihood]] decoding''' picks a codeword <math>y \in C</math> that [[Optimization (mathematics)|maximize]]s
 
:<math>\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent})</math>,
 
that is, the codeword <math>y</math> that maximizes the probability that <math>x</math> was received, [[conditional probability|given that]] <math>y</math> was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding.
In fact, by [[Bayes' theorem]],
 
:<math>
As with ideal observer decoding, a convention must be agreed to for non-unique decoding.
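For a binary symmetric channel with crossover probability <math>p < 1/2</math>, the likelihood has a closed form: <math>\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) = p^d (1-p)^{n-d}</math>, where <math>d</math> is the Hamming distance between <math>x</math> and <math>y</math>, so maximizing the likelihood amounts to minimizing the distance. A brute-force sketch for small codes (illustrative names, toy codebook):

```python
def ml_decode_bsc(x, codebook, p=0.1):
    """Brute-force maximum likelihood decoding over a binary symmetric
    channel with crossover probability p: score each codeword y by
    P(x received | y sent) = p**d * (1-p)**(n-d), where d is the
    Hamming distance between x and y, and return the best one."""
    n = len(x)
    def likelihood(y):
        d = sum(a != b for a, b in zip(x, y))
        return (p ** d) * ((1 - p) ** (n - d))
    return max(codebook, key=likelihood)

# Toy codebook: the length-5 repetition code.
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
```

Enumerating every codeword is exponential in the code dimension, which is why the structured methods below matter in practice.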
 
The maximum likelihood decoding problem can also be modeled as an [[integer programming]] problem.<ref name="Feldman_2005"/>
 
The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying the [[generalized distributive law]].<ref name="Aji-McEliece_2000"/>
 
==Minimum distance decoding==
==Syndrome decoding==
<!-- [[Syndrome decoding]] redirects here -->
'''Syndrome decoding''' is a highly efficient method of decoding a [[linear code]] over a ''noisy channel'', i.e. one on which errors are made. In essence, syndrome decoding is ''minimum distance decoding'' using a reduced lookup table. This is allowed by the linearity of the code.<ref name="Beutelspacher-Rosenbaum_1998"/>
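As a concrete illustration of the reduced lookup table, consider the [7,4] [[Hamming code]]: the decoder stores one error pattern per syndrome value rather than one entry per possible received word. A sketch with illustrative names:

```python
# Parity-check matrix H of the [7,4] Hamming code: column i (1-based)
# holds the binary digits of i, so a single-bit error in position i
# produces the syndrome "i" itself.
H = [[(i >> r) & 1 for i in range(1, 8)] for r in range(3)]

def syndrome(z):
    """Compute Hz over GF(2) for a received word z of length 7."""
    return tuple(sum(h * b for h, b in zip(row, z)) % 2 for row in H)

# Reduced lookup table: the zero pattern plus the 7 single-bit error
# patterns -- 8 entries instead of the 2**7 = 128 possible received words.
table = {}
for pos in range(-1, 7):
    e = [0] * 7
    if pos >= 0:
        e[pos] = 1
    table[syndrome(e)] = tuple(e)

def syndrome_decode(z):
    """Look up the error pattern for the syndrome of z and remove it."""
    e = table[syndrome(z)]
    return tuple(zi ^ ei for zi, ei in zip(z, e))
```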
 
Suppose that <math>C\subset \mathbb{F}_2^n</math> is a linear code of length <math>n</math> and minimum distance <math>d</math> with [[parity-check matrix]] <math>H</math>. Then clearly <math>C</math> is capable of correcting up to

:<math>t = \left\lfloor\frac{d-1}{2}\right\rfloor</math>

errors made by the channel (since if no more than <math>t</math> errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword). Under this assumption, the decoder need only store a reduced table with one row per correctable error pattern, i.e. a table of size

:<math>\sum_{i=0}^t \binom{n}{i}</math>
 
only (for a binary code). The table is indexed by the pre-computed values of <math>He</math> for each correctable error pattern <math>e \in \mathbb{F}_2^n</math>.

Knowing what <math>e</math> is, it is then trivial to decode <math>x</math> as:

:<math>x = z - e </math>

For binary codes, if both <math>k</math> and <math>n-k</math> are not too big, and assuming the code's generator matrix is in standard form, syndrome decoding can be carried out using only two precomputed lookup tables and two XORs.<ref>[[Jack Keil Wolf]] (2008) ''An Introduction to Error Correcting Codes'', Course: Communication Systems III, [[UCSD]], http://circuit.ucsd.edu/~yhk/ece154c-spr17/pdfs/ErrorCorrectionI.pdf</ref>

Let <math>z</math> be the received noisy codeword, i.e. <math>z=x\oplus e</math>. Using the encoding lookup table of size <math>2^k</math>, the codeword <math>z'</math> that corresponds to the first <math>k</math> bits of <math>z</math> is found.

The syndrome is then computed as the last <math>n-k</math> bits of <math>s=z\oplus z'</math> (the first <math>k</math> bits of the XOR are zero, since the generator matrix is in standard form, and are discarded). Using the syndrome, the error <math>e</math> is looked up in the syndrome table of size <math>2^{n-k}</math>, and the decoding is then computed as <math>x = z \oplus e</math> (giving the codeword, or the first <math>k</math> bits of <math>x</math> for the original word).

The number of entries in the two lookup tables is <math>2^k+2^{n-k}</math>, significantly smaller than the <math>2^n</math> entries required for [[standard array|standard array decoding]], which in exchange needs only a single lookup. Additionally, the precomputed encoding lookup table can be reused for encoding, and is thus often useful to have.

== List decoding ==
{{Main|List decoding}}
 
== Information set decoding ==
The simplest form is due to Prange: Let <math>G</math> be the <math>k \times n</math> generator matrix of <math>C</math> used for encoding. Select <math>k</math> columns of <math>G</math> at random, and denote by <math>G'</math> the corresponding submatrix of <math>G</math>. With reasonable probability <math>G'</math> will have full rank, which means that if we let <math>c'</math> be the sub-vector for the corresponding positions of any codeword <math>c = mG</math> of <math>C</math> for a message <math>m</math>, we can recover <math>m</math> as <math>m = c' G'^{-1}</math>. Hence, if we were lucky that these <math>k</math> positions of the received word <math>y</math> contained no errors, and hence equalled the positions of the sent codeword, then we may decode.
 
If <math>t</math> errors occurred, the probability of such a fortunate selection of columns is given by <math>\textstyle\binom{n-t}{k}/\binom{n}{k}\approx \exp(-tk/n)</math>.
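Prange's procedure can be sketched directly; here is a hedged illustration on the [7,4] Hamming code (all names are illustrative; `gf2_solve` is plain Gaussian elimination over GF(2)):

```python
import random

def gf2_solve(A, b):
    """Solve the k x k linear system A m = b over GF(2) by Gaussian
    elimination; returns m as a list of bits, or None if A is singular."""
    k = len(A)
    M = [row[:] + [bit] for row, bit in zip(A, b)]
    for col in range(k):
        pivot = next((r for r in range(col, k) if M[r][col]), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[r][k] for r in range(k)]

def prange_decode(y, G, t, iterations=200, rng=random.Random(1)):
    """Prange's method: pick k coordinates at random, hope they are
    error-free, solve m G' = y' for the message m on those coordinates,
    and accept if the re-encoded codeword is within distance t of y."""
    k, n = len(G), len(G[0])
    for _ in range(iterations):
        cols = rng.sample(range(n), k)
        # Transpose of the k x k submatrix G' on the chosen columns,
        # since m G' = y' is equivalent to (G')^T m^T = (y')^T.
        A = [[G[r][c] for r in range(k)] for c in cols]
        m = gf2_solve(A, [y[c] for c in cols])
        if m is None:
            continue  # G' not full rank; try another selection
        c = [sum(m[r] & G[r][j] for r in range(k)) % 2 for j in range(n)]
        if sum(ci != yi for ci, yi in zip(c, y)) <= t:
            return tuple(m)
    return None  # report a decoding failure

# Generator matrix (standard form) of the [7,4] Hamming code; t = 1.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
```

Because the minimum distance is 3, any accepted answer within distance <math>t = 1</math> is the unique correct message.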
 
This method has been improved in various ways, e.g. by Stern<ref name="Stern_1989"/> and [[Anne Canteaut|Canteaut]] and Sendrier.<ref name="Ohta_1998"/>
 
==Partial response maximum likelihood==
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using [[forward error correction]] based on a convolutional code.
The [[Hamming distance]] is used as a metric for hard decision Viterbi decoders. The ''squared'' [[Euclidean distance]] is used as a metric for soft decision decoders.
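The two branch metrics can be written out directly. A minimal sketch (the BPSK-style mapping 0 → +1.0 and 1 → −1.0 is an assumed convention for the soft-decision case, not specified above):

```python
def hamming_metric(received_bits, branch_bits):
    """Hard-decision branch metric: number of disagreeing bits."""
    return sum(r != b for r, b in zip(received_bits, branch_bits))

def squared_euclidean_metric(received_samples, branch_bits):
    """Soft-decision branch metric, assuming the illustrative mapping
    0 -> +1.0 and 1 -> -1.0: compare the noisy real-valued samples
    against the ideal points for the branch's expected bits."""
    ideal = [1.0 if b == 0 else -1.0 for b in branch_bits]
    return sum((r - i) ** 2 for r, i in zip(received_samples, ideal))
```

A hard-decision decoder quantizes each sample to a bit before measuring distance, discarding the reliability information that the soft metric retains.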
 
== Optimal decision decoding algorithm (ODDA) ==
Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system.{{Clarify|date=January 2023}}<ref>{{Citation |title=Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system |author1=Siamack Ghadimi |publisher=Universal Journal of Electrical and Electronic Engineering |date=2020}}</ref>
 
==See also==
* [[Error detection and correction]]
* [[Forbidden input]]
 
==References==
{{reflist|refs=
<ref name="Feldman_2005">{{cite journal |title=Using Linear Programming to Decode Binary Linear Codes |first1=Jon |last1=Feldman |first2=Martin J. |last2=Wainwright |first3=David R. |last3=Karger |journal=[[IEEE Transactions on Information Theory]] |volume=51 |issue=3 |pages=954–972 |date=March 2005 |doi=10.1109/TIT.2004.842696|s2cid=3120399 |citeseerx=10.1.1.111.6585 }}</ref>
<ref name="Beutelspacher-Rosenbaum_1998">{{cite book |author-first1=Albrecht |author-last1=Beutelspacher |author-link1=Albrecht Beutelspacher |author-first2=Ute |author-last2=Rosenbaum |date=1998 |title=Projective Geometry |page=190 |publisher=[[Cambridge University Press]] |isbn=0-521-48277-1}}</ref>
<ref name="Aji-McEliece_2000">{{cite journal |last1=Aji |first1=Srinivas M. |last2=McEliece |first2=Robert J. |title=The Generalized Distributive Law |journal=[[IEEE Transactions on Information Theory]] |date=March 2000 |volume=46 |issue=2 |pages=325–343 |doi=10.1109/18.825794 |url=https://authors.library.caltech.edu/1541/1/AJIieeetit00.pdf}}</ref>
<ref name="Stern_1989">{{cite book |author-first=Jacques |author-last=Stern |date=1989 |chapter=A method for finding codewords of small weight |title=Coding Theory and Applications |series=Lecture Notes in Computer Science |publisher=[[Springer-Verlag]] |volume=388 |pages=106–113 |doi=10.1007/BFb0019850 |isbn=978-3-540-51643-9}}</ref>
<ref name="Ohta_1998">{{cite book |date=1998 |editor-last1=Ohta |editor-first1=Kazuo |editor-first2=Dingyi |editor-last2=Pei |series=Lecture Notes in Computer Science |volume=1514 |pages=187–199 |doi=10.1007/3-540-49649-1 |isbn=978-3-540-65109-3 |s2cid=37257901 |title=Advances in Cryptology — ASIACRYPT'98 }}</ref>
}}
 
==Further reading==
* {{cite book | author-last=Hill | author-first=Raymond | title=A first course in coding theory | publisher=[[Oxford University Press]] | series=Oxford Applied Mathematics and Computing Science Series | date=1986 | isbn=978-0-19-853803-5 | url-access=registration | url=https://archive.org/details/firstcourseincod0000hill }}
* {{cite book | author-last=Pless | author-first=Vera | author-link=Vera Pless | title=Introduction to the theory of error-correcting codes |title-link= Introduction to the Theory of Error-Correcting Codes | publisher=[[John Wiley & Sons]] |series=Wiley-Interscience Series in Discrete Mathematics | date=1982 | isbn=978-0-471-08684-0 }}
* {{cite book | author-first=Jacobus H. | author-last=van Lint | title=Introduction to Coding Theory | edition=2 | publisher=[[Springer-Verlag]] | series=[[Graduate Texts in Mathematics]] (GTM) | volume=86 | date=1992 | isbn=978-3-540-54894-2 | url-access=registration | url=https://archive.org/details/introductiontoco0000lint }}
 
[[Category:Coding theory]]