# [[Hamming distance]]: Given two [[vector]]s <math>u, v \in \Sigma^n</math>, the Hamming distance between <math>u</math> and <math>v</math>, denoted by <math>\Delta(u, v)</math>, is defined to be the number of positions in which <math>u</math> and <math>v</math> differ.
# Minimum distance: Let <math>C \subseteq \Sigma^n</math> be a [[code]]. The minimum distance of the code <math>C</math> is defined to be <math>d = \min_{c_1 \ne c_2 \in C} \Delta(c_1, c_2)</math>.
# Code concatenation: Given <math>m = (m_1, \ldots, m_K) \in [Q]^K</math>, consider an outer code <math>C_\text{out} : [Q]^K \rightarrow [Q]^N</math> and an inner code <math>C_\text{in} : [q]^k \rightarrow [q]^n</math>, with minimum distances <math>D</math> and <math>d</math> respectively. A concatenated code is obtained as <math>C_\text{out} \circ C_\text{in}(m) = (C_\text{in}(C_\text{out}(m)_1), \ldots, C_\text{in}(C_\text{out}(m)_N))</math>, where <math>C_\text{out}(m) = (C_\text{out}(m)_1, \ldots, C_\text{out}(m)_N)</math>. Finally, we will take <math>C_\text{out}</math> to be the [[Reed–Solomon error correction|RS code]], which has an errors-and-erasures decoder, and <math>k = O(\log{N})</math>, which in turn implies that MLD on the inner code will be poly(<math>N</math>) time.
# Maximum likelihood decoding (MLD): MLD is a decoding method for error-correcting codes, which outputs the codeword closest to the received word in Hamming distance. The MLD function, denoted by <math>D_{MLD} : \Sigma^n \rightarrow C</math>, is defined as follows: for every <math>y \in \Sigma^n</math>, <math>D_{MLD}(y) = \arg \min_{c \in C}\Delta(c, y)</math>. (Code concatenation and MLD are illustrated in the sketch following this list.)
# [[Probability density function]]: A [[probability distribution]] <math>\Pr[\bullet]</math> on a sample space <math>S</math> is a mapping from events of <math>S</math> to [[real number]]s such that <math>\Pr[A] \ge 0</math> for any event <math>A</math>, <math>\Pr[S] = 1</math>, and <math>\Pr[A \cup B] = \Pr[A] + \Pr[B]</math> for any two mutually exclusive events <math>A</math> and <math>B</math>.
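
To make the concatenation and MLD definitions concrete, here is a minimal Python sketch (an illustration under simplifying assumptions, not the construction used in this article): it replaces the Reed–Solomon outer code with an identity placeholder, uses a hypothetical binary 3-repetition inner code, and decodes each inner block by brute-force MLD.

<syntaxhighlight lang="python">
# Toy sketch of code concatenation and brute-force MLD.
# Assumptions (not from the article): the inner code is a binary
# 3-repetition code (k = 1, n = 3, d = 3), and the "outer code" is the
# identity map, standing in for the Reed-Solomon code C_out.

def hamming_distance(u, v):
    """Number of positions in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def encode_inner(symbol):
    """C_in : [2]^1 -> [2]^3, the 3-repetition code."""
    return (symbol,) * 3

INNER_CODEBOOK = [encode_inner(m) for m in (0, 1)]

def mld_inner(received_block):
    """D_MLD for C_in: return the inner codeword closest to the block."""
    return min(INNER_CODEBOOK, key=lambda c: hamming_distance(c, received_block))

def encode_concatenated(message):
    """C_out followed by C_in: outer-encode, then inner-encode each symbol."""
    outer_codeword = tuple(message)  # identity placeholder for C_out(m)
    return tuple(encode_inner(s) for s in outer_codeword)

# Usage: encode, flip one bit inside one inner block, decode blockwise.
codeword = encode_concatenated((1, 0, 1, 1))
corrupted = [list(block) for block in codeword]
corrupted[2][0] ^= 1                      # one error in the third inner block
decoded = [mld_inner(tuple(b)) for b in corrupted]
assert decoded == list(codeword)          # the single error is corrected
</syntaxhighlight>

Since brute-force MLD enumerates all <math>q^k</math> inner codewords, taking <math>k = O(\log{N})</math> keeps this step polynomial in <math>N</math>, as noted above.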
<math>\sum_{i=1}^N e_i < \frac{Dd}{2} \qquad\qquad (1)</math>
 
Next, for every <math>1 \le i \le N</math>, we define two [[indicator variable|indicator variables]]:
 
: <math>X_i^? = 1</math> iff <math>y_i^{\prime\prime} = ?,</math>
: <math>\mathbb{E}\left[2X_i^e + X_i^?\right] \le \frac{2e_i}{d} \qquad\qquad (2)</math>
 
Clearly, by definition <math>e' = \sum_{i} X_i^e</math> and <math>s' = \sum_{i} X_i^?</math>. Further, by the [[linearity of expectation|linearity]] of expectation, we get <math>\mathbb{E}[2e' + s'] \le \frac{2}{d}\sum_i e_i < D</math>. We consider two cases to prove (2): the <math>i</math>th block is correctly decoded ('''Case 1'''), and the <math>i</math>th block is incorrectly decoded ('''Case 2''').
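
In more detail, the bound follows by summing (2) over all <math>N</math> blocks (using linearity of expectation) and then applying (1):

: <math>\mathbb{E}[2e' + s'] = \sum_{i=1}^{N} \mathbb{E}\left[2X_i^e + X_i^?\right] \le \sum_{i=1}^{N} \frac{2e_i}{d} = \frac{2}{d}\sum_{i=1}^{N} e_i < \frac{2}{d} \cdot \frac{Dd}{2} = D.</math>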
 
'''Case 1:''' <math>(c_i = C_\text{in}(y_i'))</math>
 
==See also==
#[[Concatenated code|Concatenated codes]]
#[[Reed–Solomon error correction]]
#[[Berlekamp–Welch algorithm]]
 