{{Short description|Algorithms to decode messages}}
{{use dmy dates|date=December 2020|cs1-dates=y}}
In [[coding theory]], '''decoding''' is the process of translating received messages into [[Code word (communication)|codewords]] of a given [[code]]. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a [[noisy channel]], such as a [[binary symmetric channel]].
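As an illustrative sketch (the function name here is ours, not a standard library API), a binary symmetric channel can be simulated by flipping each bit independently with a fixed crossover probability:

```python
import random

def bsc(bits, p, rng):
    """Send bits through a binary symmetric channel: each bit is
    flipped independently with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
received = bsc(sent, p=0.2, rng=random.Random(0))
# a decoder's task is to recover `sent` from `received`
```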
==Notation==
:# Choose any codeword at random from the set of nearest codewords
:# If [[Concatenated error correction code|another code follows]], mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them
:# Report a decoding failure to the system
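The situation the options above address can be sketched for a toy code as follows (the helper names are illustrative, not standard):

```python
def hamming(a, b):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

def nearest_codewords(received, codebook):
    """Return the set of codewords at minimum Hamming distance from the
    received word.  A singleton set is an unambiguous decode; several
    elements mean the decoder must apply one of the options above."""
    best = min(hamming(received, c) for c in codebook)
    return {c for c in codebook if hamming(received, c) == best}

# Length-3 repetition code {000, 111}: received 010 decodes to 000.
code = {(0, 0, 0), (1, 1, 1)}
nearest_codewords((0, 1, 0), code)
```

With the even-length code {(0, 0), (1, 1)} and received word (0, 1), both codewords are at distance 1, so the decoder must guess, mark erasures, or report a failure.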
==Maximum likelihood decoding==
{{Further|Maximum likelihood}}
Given a received vector <math>x \in \mathbb{F}_2^n</math>, '''maximum likelihood''' decoding picks a codeword <math>y \in C</math> that maximizes
:<math>\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent})</math>,
that is, the codeword <math>y</math> that maximizes the probability that <math>x</math> was received, [[conditional probability|given that]] <math>y</math> was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding.
In fact, by [[Bayes' theorem]],
:<math>
\begin{align}
\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) & {} = \frac{\mathbb{P}(x \mbox{ received}, y \mbox{ sent})}{\mathbb{P}(y \mbox{ sent})} \\
& {} = \mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}) \cdot \frac{\mathbb{P}(x \mbox{ received})}{\mathbb{P}(y \mbox{ sent})}.
\end{align}
</math>
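For a small code over a binary symmetric channel with crossover probability <math>p < 1/2</math>, maximum likelihood decoding can be carried out by brute force; a minimal sketch (the codebook and names here are illustrative):

```python
def ml_decode(received, codebook, p):
    """Pick the codeword y maximizing P(received | y sent).
    Over a BSC, P(received | y sent) = p^d * (1 - p)^(n - d), where d is
    the Hamming distance between received and y; for p < 1/2 this is
    maximized exactly when d is minimized."""
    def likelihood(y):
        d = sum(a != b for a, b in zip(received, y))
        return p ** d * (1 - p) ** (len(y) - d)
    return max(codebook, key=likelihood)

# Illustrative codebook: the length-4 repetition code.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]
ml_decode((1, 1, 0, 1), code, p=0.1)
```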
== List decoding ==
{{Main|List decoding}}

== Information set decoding ==
The simplest form is due to Prange: Let <math>G</math> be the <math>k \times n</math> generator matrix of <math>C</math> used for encoding. Select <math>k</math> columns of <math>G</math> at random, and denote by <math>G'</math> the corresponding submatrix of <math>G</math>. With reasonable probability <math>G'</math> will have full rank, which means that if we let <math>c'</math> be the sub-vector for the corresponding positions of any codeword <math>c = mG</math> of <math>C</math> for a message <math>m</math>, we can recover <math>m</math> as <math>m = c' G'^{-1}</math>. Hence, if we are lucky and these <math>k</math> positions of the received word <math>y</math> contain no errors (that is, they coincide with the corresponding positions of the sent codeword), then we can decode.
If <math>t</math> errors occurred, the probability of such a fortunate selection of columns is given by <math>\textstyle\binom{n-t}{k}/\binom{n}{k}\approx \exp(-tk/n)</math>.
This method has been improved in various ways, e.g. by Stern<ref name="Stern_1989"/> and [[Anne Canteaut|Canteaut]] and Sendrier.<ref name="Ohta_1998"/>
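Prange's method can be sketched as follows for a toy [7,4] Hamming code, which corrects <math>t = 1</math> error (the helper functions and parameters are illustrative, not a production implementation):

```python
import random

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.
    A is a k x k list of 0/1 row lists; returns x, or None if A is singular."""
    k = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(k):
        pivot = next((r for r in range(col, k) if M[r][col]), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [a ^ p for a, p in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

def prange_decode(y, G, t, rng, trials=200):
    """Prange's method: pick k random positions of y, invert the
    corresponding submatrix G' of G, and accept the recovered message
    if re-encoding it lands within t errors of y."""
    k, n = len(G), len(G[0])
    for _ in range(trials):
        pos = rng.sample(range(n), k)
        Gp = [[G[i][j] for j in pos] for i in range(k)]
        # solve m G' = c', i.e. the transposed system G'^T m = c'
        m = gf2_solve([[Gp[i][j] for i in range(k)] for j in range(k)],
                      [y[j] for j in pos])
        if m is None:          # G' not of full rank; try another selection
            continue
        c = [sum(m[i] & G[i][j] for i in range(k)) % 2 for j in range(n)]
        if sum(ci != yi for ci, yi in zip(c, y)) <= t:
            return m, c
    return None

# Toy [7,4] Hamming code generator matrix.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
y = [1, 0, 1, 1, 0, 0, 0]      # codeword of m = 1011 with one bit flipped
m, c = prange_decode(y, G, t=1, rng=random.Random(1))
```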
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using [[forward error correction]] based on a convolutional code.
The [[Hamming distance]] is used as a metric for hard-decision Viterbi decoders. The ''squared'' [[Euclidean distance]] is used as a metric for soft-decision decoders.
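As an illustration, the following is a minimal hard-decision Viterbi decoder for the common rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal, using the Hamming distance as the branch metric; a sketch under those assumptions, not a production decoder:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal: emits
    (u ^ s1 ^ s2, u ^ s2) for input u and shift register (s1, s2)."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoding of the code above: the branch
    metric is the Hamming distance between the expected and received
    output pair, and one survivor path is kept per trellis state."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metrics = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for t in range(nbits):
        r = received[2 * t: 2 * t + 2]
        new_metrics = {s: INF for s in states}
        new_paths = {}
        for (s1, s2), m in metrics.items():
            if m == INF:
                continue
            for u in (0, 1):
                expected = (u ^ s1 ^ s2, u ^ s2)
                branch = (expected[0] != r[0]) + (expected[1] != r[1])
                ns = (u, s1)
                if m + branch < new_metrics[ns]:
                    new_metrics[ns] = m + branch
                    new_paths[ns] = paths[(s1, s2)] + [u]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return paths[best]

msg = [1, 0, 1, 1, 0]
noisy = conv_encode(msg)
noisy[3] ^= 1                      # one channel error
decoded = viterbi_decode(noisy, len(msg))
```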
== Optimal decision decoding algorithm (ODDA) ==
Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system.{{Clarify|date=January 2023}}<ref>{{Citation |title=Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system |author1=Siamack Ghadimi |publisher=Universal Journal of Electrical and Electronic Engineering |date=2020}}</ref>
==See also==
==References==
{{reflist|refs=
<ref name="Feldman_2005">{{cite journal |last1=Feldman |first1=Jon |last2=Wainwright |first2=Martin J. |last3=Karger |first3=David R. |title=Using Linear Programming to Decode Binary Linear Codes |journal=[[IEEE Transactions on Information Theory]] |date=March 2005 |volume=51 |issue=3 |pages=954–972 |doi=10.1109/TIT.2004.842696}}</ref>
<ref name="Beutelspacher-Rosenbaum_1998">{{cite book |author-first1=Albrecht |author-last1=Beutelspacher |author-link1=Albrecht Beutelspacher |author-first2=Ute |author-last2=Rosenbaum |date=1998 |title=Projective Geometry |page=190 |publisher=[[Cambridge University Press]] |isbn=0-521-48277-1}}</ref>
<ref name="Aji-McEliece_2000">{{cite journal |last1=Aji |first1=Srinivas M. |last2=McEliece |first2=Robert J. |title=The Generalized Distributive Law |journal=[[IEEE Transactions on Information Theory]] |date=March 2000 |volume=46 |issue=2 |pages=325–343 |doi=10.1109/18.825794 |url=https://authors.library.caltech.edu/1541/1/AJIieeetit00.pdf}}</ref>
<ref name="Stern_1989">{{cite book |author-first=Jacques |author-last=Stern |date=1989 |chapter=A method for finding codewords of small weight |title=Coding Theory and Applications |series=Lecture Notes in Computer Science |publisher=[[Springer-Verlag]] |volume=388 |pages=106–113 |doi=10.1007/BFb0019850 |isbn=978-3-540-51643-9}}</ref>
<ref name="Ohta_1998">{{cite book |author-last1=Canteaut |author-first1=Anne |author-link1=Anne Canteaut |author-last2=Sendrier |author-first2=Nicolas |editor-last1=Ohta |editor-first1=Kazuo |editor-last2=Pei |editor-first2=Dingyi |date=1998 |chapter=Cryptanalysis of the original McEliece cryptosystem |title=Advances in Cryptology — ASIACRYPT'98 |series=Lecture Notes in Computer Science |volume=1514 |publisher=[[Springer-Verlag]] |pages=187–199 |doi=10.1007/3-540-49649-1_16 |isbn=978-3-540-65109-3}}</ref>
}}