==Introduction==
In [[coding theory]], '''Justesen codes''' form a class of [[Error detection and correction|error-correcting codes]] which are derived from [[Reed-Solomon code]]s and have good error-control properties. In [[Concatenated error correction code|code concatenation]], an outer code <math>C_{out}</math> is composed with an inner code <math>C_{in}</math>. By choosing the outer and inner codes <math>C_{out}</math> and <math>C_{in}</math> appropriately, one can obtain new bounds, such as the [[Zyablov bound]]. The Justesen code concatenates a Reed-Solomon outer code with the [[Wozencraft ensemble]] of inner codes and has a strongly explicit construction; loosely speaking, a code has a strongly explicit construction if it can be constructed in logarithmic space. Unlike the usual code concatenation, the Justesen code obtains its power by using different linear inner codes, as described below.
 
==Definition==
The Justesen code is a [[Concatenated error correction code|concatenated code]] with different linear inner codes. It is composed of an <math>(N,K,D)_{q^k}</math> outer code <math>C_{out}</math> and <math>N</math> different <math>(n,k,d)_q</math> inner codes <math>C_{in}^i</math>, <math>1 \le i \le N</math>. More precisely, the concatenation of these codes, denoted by <math>C_{out} \circ (C_{in}^1 ,\ldots,C_{in}^N )</math>, is defined as follows. Given a message <math>m \in [q^k]^K</math>, we first compute the outer codeword <math>C_{out}(m) = (c_1,c_2,\ldots,c_N)</math>, and then apply the <math>i</math>-th inner code to the <math>i</math>-th coordinate, producing the final codeword <math>C_{out} \circ (C_{in}^1,\ldots,C_{in}^N)(m) = (C_{in}^1(c_1),C_{in}^2(c_2),\ldots,C_{in}^N(c_N))</math>. This definition makes sense because the codeword of the outer code is a vector with <math>N</math> coordinates, and there are <math>N</math> inner codes to apply to those <math>N</math> coordinates.

For the Justesen code, the outer code <math>C_{out}</math> is chosen to be a [[Reed-Solomon code]] over the [[field (mathematics)|field]] <math>\mathbb{F}_{q^k}</math>, evaluated over <math>\mathbb{F}_{q^k} \setminus \{ 0 \}</math>, of [[code rate|rate]] <math>R</math>, <math>0 < R < 1</math>. The outer code <math>C_{out}</math> has relative distance <math>\delta_{out} = 1 - R</math> and block length <math>N = q^k-1</math>. The set of inner codes is the [[Wozencraft ensemble]] <math>\{ C_{in}^\alpha \} _{\alpha \in \mathbb{F}_{q^k }^* }</math>.

In the original binary construction (<math>q = 2</math>, <math>k = m</math>), the outer code is a Reed-Solomon code of length ''N'' = 2<sup>''m''</sup>&nbsp;&minus;&nbsp;1, [[dimension (vector space)|rank]] ''K'' and minimum weight ''N''&nbsp;&minus;&nbsp;''K''&nbsp;+&nbsp;1. Its symbols are elements of ''F'' = GF(2<sup>''m''</sup>) and its codewords are obtained by taking every polynomial &fnof; over ''F'' of degree less than ''K'' and listing the values of &fnof; on the non-zero elements of ''F'' in some predetermined order. Let α be a [[primitive element (finite field)|primitive element]] of ''F''. For a codeword '''a''' = (''a''<sub>1</sub>,&nbsp;...,&nbsp;''a''<sub>''N''</sub>), let '''b''' be the vector of length 2''N'' over ''F'' given by

:<math> \mathbf{b} = \left( a_1, a_1, a_2, \alpha^1 a_2, \ldots, a_N, \alpha^{N-1} a_N \right) </math>

and let '''c''' be the vector of length 2''N''&nbsp;''m'' obtained from '''b''' by expressing each element of ''F'' as a binary vector of length ''m''. The binary ''Justesen code'' is the linear code containing all such '''c'''. Here the inner code applied at position ''i'' is the rate-<math>\frac{1}{2}</math> linear map <math>x \mapsto (x, \alpha^{i-1}x)</math>, written out in binary, so the inner codes are exactly the members of the Wozencraft ensemble.
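The following toy sketch illustrates the binary construction above for <math>m = 4</math>. It is only an illustration under these parameter choices, and the helper names (such as <code>gf_mul</code>, <code>rs_encode</code> and <code>justesen_encode</code>) are invented for this example rather than standard terminology. It evaluates the outer Reed-Solomon code at the non-zero elements of GF(2<sup>4</sup>), applies the position-dependent inner map <math>x \mapsto (x, \alpha^{i-1}x)</math>, and expands everything into bits.

<syntaxhighlight lang="python">
# Toy illustration of the binary Justesen construction (m = 4, so N = 15).
# GF(2^4) is built from the primitive polynomial x^4 + x + 1.

M = 4
N = (1 << M) - 1           # 15 non-zero field elements
PRIM_POLY = 0b10011        # x^4 + x + 1

def gf_mul(a, b):
    """Multiply two elements of GF(2^4), represented as integers 0..15."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & (1 << M):
            a ^= PRIM_POLY
        b >>= 1
    return result

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

ALPHA = 0b0010             # the class of x, a primitive element for this polynomial

def rs_encode(message):
    """Outer Reed-Solomon code: evaluate the message polynomial
    (degree < K) at the non-zero field elements alpha^0 .. alpha^(N-1)."""
    codeword = []
    for i in range(N):
        point = gf_pow(ALPHA, i)
        value = 0
        for coeff in reversed(message):          # Horner's rule
            value = gf_mul(value, point) ^ coeff
        codeword.append(value)
    return codeword

def to_bits(x):
    """Expand a field element into a binary vector of length m."""
    return [(x >> j) & 1 for j in range(M)]

def justesen_encode(message):
    """Apply the i-th inner code x -> (x, alpha^(i-1) * x) to the i-th
    Reed-Solomon symbol, then expand everything into bits."""
    a = rs_encode(message)
    bits = []
    for i, a_i in enumerate(a, start=1):
        bits += to_bits(a_i) + to_bits(gf_mul(gf_pow(ALPHA, i - 1), a_i))
    return bits

# Example: K = 3 message symbols give a binary codeword of length 2*N*m = 120.
print(len(justesen_encode([1, 5, 9])))   # 120
</syntaxhighlight>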
 
==Properties==
The parameters of the binary Justesen code are length 2''m''&nbsp;''N'', dimension ''m''&nbsp;''K'' and [[minimum distance]] at least

:<math> \sum_{i=1}^\ell i \binom{2m}{i} , </math>

where <math>\ell</math> is the greatest integer satisfying <math>\sum_{i=1}^\ell \binom{2m}{i}\leq N-K+1</math>. The Justesen codes are examples of [[concatenated code]]s.

As the linear codes in the Wozencraft ensemble have rate <math>\frac{1}{2}</math>, the Justesen code is the concatenated code <math>C^* = C_{out} \circ (C_{in}^1,C_{in}^2,\ldots,C_{in}^N)</math> with rate <math>\frac{R}{2}</math>. The following theorem estimates the distance of the concatenated code <math>C^*</math>.

==Theorem==

Let <math>\varepsilon > 0</math>. Then <math>C^*</math> has relative distance at least <math>(1-R-\varepsilon)H_q^{-1}(\frac{1}{2}-\varepsilon)</math>.

'''''Proof:'''''

The idea of the proof is to show that the Hamming distance between any two distinct codewords of <math>C^*</math> is at least <math>(1-R-\varepsilon)H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k \cdot N</math>.
 
Denote by <math>\Delta(c^1,c^2)</math> the Hamming distance between two codewords <math>c^1</math> and <math>c^2</math>.
 
So for any given <math>m_1</math> and <math>m_2</math> in <math>(\mathbb{F}_{q^k})^K</math> (<math>m_1 \ne m_2</math>), we want to lower bound <math>\Delta(C^*(m_1),C^*(m_2))</math>.
 
Notice that if <math>C_{out}(m) = (c_1,c_2,\ldots,c_N)</math>, then <math>C^*(m) = (C_{in}^1(c_1),C_{in}^2(c_2),\ldots,C_{in}^N(c_N))</math>. So to lower bound <math>\Delta(C^*(m_1),C^*(m_2))</math>, we need to take into account the distances of the inner codes <math>C_{in}^i</math> for <math>i = 1,2,\ldots,N</math>.
 
Suppose <math>C_{out}(m_1) = (c_1^1,c_2^1,\ldots,c_N^1)</math> and <math>C_{out}(m_2) = (c_1^2,c_2^2,\ldots,c_N^2)</math>.
 
Recall that <math>\{ C_{in}^i \}_{1 \le i \le N}</math> is a [[Wozencraft ensemble]]. By the property of the Wozencraft ensemble, at least <math>(1-\varepsilon)N</math> of the linear codes <math>C_{in}^i</math> have distance at least <math>H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k</math>.
 
So if for some <math>1 \le i \le N</math> we have <math>c_i^1 \ne c_i^2</math> and the code <math>C_{in}^i</math> has distance <math>\ge H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k</math>, then <math>\Delta(C_{in}^i(c_i^1),C_{in}^i(c_i^2)) \ge H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k</math>.
 
Further, if there are <math>T</math> indices <math>1 \le i \le N</math> such that <math>c_i^1 \ne c_i^2</math> and the code <math>C_{in}^i</math> has distance <math>\ge H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k</math>, then <math>\Delta(C^*(m_1),C^*(m_2)) \ge H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k \cdot T</math>.
 
So now the final task is to lower bound <math>T</math>.
 
Denote by <math>S</math> the set of all indices <math>i</math> (<math>1 \le i \le N</math>) such that <math>c_i^1 \ne c_i^2</math>. Then <math>T</math> is the number of linear codes <math>C_{in}^i</math> with <math>i \in S</math> having distance at least <math>H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k</math>.
 
Now we estimate <math>\left| S \right|</math>. Notice that <math>\left| S \right| = \Delta(C_{out}(m_1),C_{out}(m_2)) \ge \delta_{out} N = (1-R)N</math>, since the outer code has relative distance <math>\delta_{out} = 1 - R</math>.
 
By the property of the [[Wozencraft ensemble]], at most <math>\varepsilon N</math> of the linear codes have distance less than <math>H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k</math>, so <math>T \ge \left| S \right| - \varepsilon N \ge (1-R)N - \varepsilon N = (1-R-\varepsilon)N</math>.
 
Finally, we have
 
<math>\Delta(C^*(m_1),C^*(m_2)) \ge H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k \cdot T \ge H_q^{-1}(\frac{1}{2}-\varepsilon) \cdot 2k \cdot (1-R-\varepsilon) \cdot N</math>.
 
This holds for arbitrary <math>m_1 \ne m_2</math>. So <math>C^*</math> has relative distance at least <math>(1-R-\varepsilon)H_q^{-1}(\frac{1}{2}-\varepsilon)</math>, which completes the proof.
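As a rough numerical illustration of the bound (the specific parameters here are chosen only for concreteness and are not taken from the references), let <math>q = 2</math> and <math>R = \frac{1}{2}</math>. The overall binary code then has rate <math>\frac{R}{2} = \frac{1}{4}</math> and, for small <math>\varepsilon</math>, relative distance approaching

:<math> \left(1-\tfrac{1}{2}\right) H_2^{-1}\!\left(\tfrac{1}{2}\right) \approx 0.5 \times 0.11 \approx 0.055, </math>

since <math>H_2(0.11) \approx \frac{1}{2}</math>.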
 
==Comments==
A natural question is what a "strongly explicit" code is. Loosely speaking, for a linear code the "explicit" property refers to the complexity of constructing its generator matrix ''G'': the code is strongly explicit if the matrix can be computed in logarithmic space, without resorting to a brute-force search to verify that the code has the required distance (a small sketch at the end of this section illustrates the generator-matrix point for the inner codes of the Justesen construction).
 
For codes that are not linear, one can instead consider the complexity of the encoding algorithm.
 
As we have seen, both the Wozencraft ensemble and Reed-Solomon codes are strongly explicit. Therefore we have the following result:
 
'''''Corollary:''''' The concatenated code <math>C^*</math> is an asymptotically good code (that is, it has rate <math>R > 0</math> and relative distance <math>\delta > 0</math> even for small <math>q</math>) and has a strongly explicit construction.
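To make the generator-matrix remark above concrete, the following toy sketch (an illustration only; helper names such as <code>inner_generator_matrix</code> are invented here, and this is not the actual logarithmic-space algorithm) writes down the binary generator matrix of the <math>i</math>-th inner code <math>x \mapsto (x, \alpha^{i-1}x)</math> directly from the field arithmetic, with no search over candidate codes:

<syntaxhighlight lang="python">
# The inner code at position i of the binary Justesen construction is the
# GF(2)-linear map x -> (x, alpha^(i-1) * x) on GF(2^m).  Its m x 2m binary
# generator matrix can therefore be written down row by row, by encoding the
# m basis elements 1, x, x^2, ..., x^(m-1).

M = 4
PRIM_POLY = 0b10011        # x^4 + x + 1, a primitive polynomial for GF(2^4)
ALPHA = 0b0010             # the class of x, a primitive element

def gf_mul(a, b):
    """Multiply two elements of GF(2^4), represented as integers 0..15."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & (1 << M):
            a ^= PRIM_POLY
        b >>= 1
    return result

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def inner_generator_matrix(i):
    """Generator matrix (as a list of bit rows) of the i-th inner code."""
    mult = gf_pow(ALPHA, i - 1)
    rows = []
    for j in range(M):                       # basis element x^j of GF(2^m)
        e_j = 1 << j
        image = (e_j, gf_mul(mult, e_j))     # inner encoding of the basis element
        rows.append([(w >> t) & 1 for w in image for t in range(M)])
    return rows

for row in inner_generator_matrix(3):        # inner code used at position i = 3
    print(row)
</syntaxhighlight>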
 
==See also==
* [[Wozencraft ensemble]]
* [[Concatenated error correction code]]
* [[Reed–Solomon error correction]]
* [[Linear code]]
 
== References ==
* [http://www.cse.buffalo.edu/~atri/courses/coding-theory/ Lecture 28: Justesen Code. Coding theory course. Prof. Atri Rudra]
* [http://people.csail.mit.edu/madhu/FT02/ Lecture 6: Concatenated codes. Forney codes. Justesen codes. Essential Coding Theory]
* {{cite journal | author=J. Justesen | title=A class of constructive asymptotically good algebraic codes | journal=IEEE Trans. Info. Theory | volume=18 | year=1972 | pages=652–656 | doi=10.1109/TIT.1972.1054893 }}
* {{cite book | author=F.J. MacWilliams | authorlink=Jessie MacWilliams | coauthors=N.J.A. Sloane | title=The Theory of Error-Correcting Codes | publisher=North-Holland | date=1977 | isbn=0-444-85193-3 | pages=306–316 }}
 
[[Category:Error detection and correction]]