{{Short description|Algorithms to decode messages}}
{{use dmy dates|date=December 2020|cs1-dates=y}}
In [[coding theory]], '''decoding''' is the process of translating received messages into [[Code word (communication)|codewords]] of a given [[code]]. There are many common methods of mapping messages to codewords. These methods are often used to recover messages sent over a [[noisy channel]], such as a [[binary symmetric channel]].
 
==Notation==
:# Choose any codeword at random from the set of most likely codewords.
:# If [[Concatenated error correction code|another code follows]], mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.
:# Report a decoding failure to the system (see the sketch below).
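
A minimal Python sketch of how a minimum-distance decoder might apply the last three of these conventions; the codebook representation, the function names, and the <code>'?'</code> erasure marker are illustrative assumptions, not part of any standard library:

<syntaxhighlight lang="python">
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_codewords(received, codebook):
    """All codewords at minimum Hamming distance from the received word."""
    best = min(hamming(received, c) for c in codebook)
    return [c for c in codebook if hamming(received, c) == best]

def decode(received, codebook, convention="random"):
    candidates = nearest_codewords(received, codebook)
    if len(candidates) == 1:
        return candidates[0]              # unambiguous: a unique most likely codeword
    if convention == "random":
        return random.choice(candidates)  # pick any most likely codeword
    if convention == "erasure":
        # mark bits on which the candidates disagree as erasures for an outer code
        return tuple(c0 if len({c[i] for c in candidates}) == 1 else "?"
                     for i, c0 in enumerate(candidates[0]))
    return None                           # report a decoding failure

# Example: a received word equidistant from two codewords.
codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(decode((0, 0, 1, 1), codebook, convention="erasure"))  # -> ('?', '?', '?', '?')
</syntaxhighlight>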
 
==Maximum likelihood decoding==
 
that is, the codeword <math>y</math> that maximizes the probability that <math>x</math> was received, [[conditional probability|given that]] <math>y</math> was sent. If all codewords are equally likely to be sent, then this scheme is equivalent to ideal observer decoding.
In fact, by [[Bayes' theorem]],
 
:<math>
\begin{align}
\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) & {} = \frac{\mathbb{P}(x \mbox{ received}, y \mbox{ sent})}{\mathbb{P}(y \mbox{ sent})} \\
& {} = \mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}) \cdot \frac{\mathbb{P}(x \mbox{ received})}{\mathbb{P}(y \mbox{ sent})}.
\end{align}
</math>
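
On a [[binary symmetric channel]] with crossover probability <math>p < 1/2</math>, a word <math>x</math> is received from a sent codeword <math>y</math> with probability <math>p^d(1-p)^{n-d}</math>, where <math>d</math> is the [[Hamming distance]] between them, so maximizing the likelihood amounts to minimizing <math>d</math>. A minimal Python sketch by exhaustive search; the explicit codebook and function names are illustrative, and practical decoders exploit the code's structure instead:

<syntaxhighlight lang="python">
def bsc_likelihood(x, y, p):
    """P(x received | y sent) on a binary symmetric channel with crossover probability p."""
    d = sum(a != b for a, b in zip(x, y))  # Hamming distance between x and y
    return (p ** d) * ((1 - p) ** (len(x) - d))

def ml_decode(x, codebook, p=0.1):
    """Maximum likelihood decoding by exhaustive search over the codebook."""
    return max(codebook, key=lambda y: bsc_likelihood(x, y, p))

# Example: the [3,1] repetition code corrects the single flipped bit.
codebook = [(0, 0, 0), (1, 1, 1)]
print(ml_decode((0, 1, 0), codebook))  # -> (0, 0, 0)
</syntaxhighlight>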
The simplest form is due to Prange: let <math>G</math> be the <math>k \times n</math> generator matrix of <math>C</math> used for encoding. Select <math>k</math> columns of <math>G</math> at random, and denote by <math>G'</math> the corresponding submatrix of <math>G</math>. With reasonable probability <math>G'</math> will have full rank, which means that if we let <math>c'</math> be the sub-vector for the corresponding positions of any codeword <math>c = mG</math> of <math>C</math> for a message <math>m</math>, we can recover <math>m</math> as <math>m = c' G'^{-1}</math>. Hence, if we are lucky and these <math>k</math> positions of the received word <math>y</math> contain no errors, so that they equal the corresponding positions of the sent codeword, then we can decode.
 
If <math>t</math> errors occurred, the probability of such a fortunate selection of columns is given by <math>\textstyle\binom{n-t}{k}/\binom{n}{k}\approx \exp(-tk/n)</math>.
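
A minimal Python/NumPy sketch of this procedure. The GF(2) inversion helper and the acceptance test (re-encoding the candidate message and checking that at most <math>t</math> coordinates differ from <math>y</math>) are illustrative choices, not Prange's original formulation:

<syntaxhighlight lang="python">
import random
import numpy as np

def gf2_inv(A):
    """Invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination; None if singular."""
    n = A.shape[0]
    M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        if pivot is None:
            return None
        M[[col, pivot]] = M[[pivot, col]]  # move the pivot row into place
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]             # clear the rest of the column
    return M[:, n:]

def prange_decode(y, G, t, max_tries=10_000):
    """Retry random column selections until the k chosen positions of y are error-free."""
    k, n = G.shape
    for _ in range(max_tries):
        cols = random.sample(range(n), k)
        G_inv = gf2_inv(G[:, cols])
        if G_inv is None:
            continue                       # G' not full rank; resample the columns
        m = (y[cols] @ G_inv) % 2          # m = c' G'^{-1}
        if np.count_nonzero((m @ G + y) % 2) <= t:
            return m                       # re-encoding differs from y in <= t places
    return None                            # each try succeeds with prob. ~ exp(-tk/n)

# Example: [7,4] Hamming code with a single flipped bit.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
y = (np.array([1, 0, 1, 1]) @ G) % 2
y[2] ^= 1                                  # introduce one error
print(prange_decode(y, G, t=1))            # -> [1 0 1 1] (with high probability)
</syntaxhighlight>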
 
This method has been improved in various ways, e.g. by Stern<ref name="Stern_1989"/> and by [[Anne Canteaut|Canteaut]] and Sendrier.<ref name="Ohta_1998"/>
Line 153 ⟶ 155:
==References==
{{reflist|refs=
<ref name="Feldman_2005">{{cite journal |title=Using Linear Programming to Decode Binary Linear Codes |first1=Jon |last1=Feldman |first2=Martin J. |last2=Wainwright |first3=David R. |last3=Karger |journal=[[IEEE Transactions on Information Theory]] |volume=51 |issue=3 |pages=954–972 |date=March 2005 |doi=10.1109/TIT.2004.842696|s2cid=3120399 |citeseerx=10.1.1.111.6585 }}</ref>
<ref name="Beutelspacher-Rosenbaum_1998">{{cite book |author-first1=Albrecht |author-last1=Beutelspacher |author-link1=Albrecht Beutelspacher |author-first2=Ute |author-last2=Rosenbaum |date=1998 |title=Projective Geometry |page=190 |publisher=[[Cambridge University Press]] |isbn=0-521-48277-1}}</ref>
<ref name="Aji-McEliece_2000">{{cite journal |last1=Aji |first1=Srinivas M. |last2=McEliece |first2=Robert J. |title=The Generalized Distributive Law |journal=[[IEEE Transactions on Information Theory]] |date=March 2000 |volume=46 |issue=2 |pages=325–343 |doi=10.1109/18.825794 |url=https://authors.library.caltech.edu/1541/1/AJIieeetit00.pdf}}</ref>