==Introduction==
In [[coding theory]], generalized minimum-distance (GMD) decoding provides an efficient [[algorithm]] for decoding [[concatenated code]]s, based on using an errors-and-[[Erasure_code|erasures]] decoder for the [[outer code]].
 
A [[Decoding_methods|naive decoding algorithm]] for concatenated codes is not optimal, because it ignores the information that [[Maximum_likelihood_decoding|maximum likelihood decoding (MLD)]] provides: it treats every inner received [[codeword]] the same, regardless of its [[Hamming distance]] from the nearest inner codeword. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. In 1966, [[David Forney]] devised a better algorithm, called generalized minimum-distance (GMD) decoding, that makes use of this information: it measures the confidence of each received symbol and erases symbols whose confidence falls below a desired threshold. GMD decoding was one of the first examples of [[Soft-decision_decoder|soft-decision decoding]]. We will present three versions of the GMD decoding algorithm: the first two are [[randomized algorithm]]s, while the last is a [[deterministic algorithm]].
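
The following is a minimal sketch of one round of the randomized version, under stated assumptions: it presumes a hypothetical inner MLD routine <code>inner_mld</code>, an inner code of minimum distance <code>d</code>, and a hypothetical errors-and-erasures decoder <code>outer_ee_decode</code> for the outer code. These names are illustrative and do not come from the article.

<syntaxhighlight lang="python">
import random

ERASURE = None  # sentinel marking an erased outer symbol

def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def gmd_decode_randomized(y_blocks, inner_mld, d, outer_ee_decode):
    """One round of randomized GMD decoding (illustrative sketch).

    y_blocks        -- the received word, split into its inner blocks
    inner_mld       -- hypothetical inner MLD: block -> (outer symbol,
                       nearest inner codeword)
    d               -- minimum distance of the inner code
    outer_ee_decode -- hypothetical errors-and-erasures decoder for
                       the outer code
    """
    y_pp = []
    for y_i in y_blocks:
        symbol, codeword = inner_mld(y_i)
        # Confidence weight: distance from the received block to the
        # nearest inner codeword, capped at d/2.
        w_i = min(hamming(y_i, codeword), d / 2)
        # Erase the symbol with probability 2*w_i/d, so low-confidence
        # symbols are more likely to be passed on as erasures.
        y_pp.append(ERASURE if random.random() < 2 * w_i / d else symbol)
    # The outer decoder handles the remaining errors and erasures.
    return outer_ee_decode(y_pp)
</syntaxhighlight>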
 
'''Case 1:''' <math>(c_i = C_{in}(y_i'))</math>
 
Note that an erasure is not counted as an error, and in this case a symbol that is not erased is decoded correctly, so <math>X_i^e = 0</math> always. Since <math>Pr[y_i^{\prime\prime} = ?] = {2\omega_i \over d}</math>, this implies
<math>\mathbb{E}[X_i^?] = Pr[X_i^? = 1] = {2\omega_i \over d}</math> and <math>\mathbb{E}[X_i^e] = Pr[X_i^e = 1] = 0</math>.
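
For instance, with inner distance <math>d = 10</math> and confidence weight <math>\omega_i = 3</math> (illustrative values), the <math>i</math>-th symbol is erased with probability <math>{2 \cdot 3 \over 10} = 0.6</math>; since any non-erased symbol is correct in this case, the symbol contributes <math>0.6</math> to the expected number of erasures and <math>0</math> to the expected number of errors.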
 
#G. David Forney. Generalized minimum distance decoding. ''IEEE Transactions on Information Theory'', 12:125–131, 1966.
 