}}
'''Reed–Muller codes''' are [[error-correcting code]]s that are used in wireless communications applications, particularly in deep-space communication.<ref>{{Citation|last=Massey|first=James L.|date=1992|pages=1–17|publisher=Springer-Verlag|language=en|doi=10.1007/bfb0036046|isbn=978-3540558514|title=Advanced Methods for Satellite and Deep Space Communications|volume=182|series=Lecture Notes in Control and Information Sciences|chapter=Deep-space communications and coding: A marriage made in heaven|citeseerx=10.1.1.36.4265}}[https://www.isiweb.ee.ethz.ch/archive/massey_pub/pdf/BI321.pdf pdf]</ref> Moreover, the proposed [[5G|5G standard]]<ref>{{cite web|url=http://www.3gpp.org/ftp/tsg_ran/WG1_RL1/TSGR1_87/Report/Final_Minutes_report_RAN1%2387_v100.zip|title=3GPP RAN1 meeting #87 final report|publisher=3GPP|accessdate=31 August 2017}}</ref> relies on the closely related [[Polar code (coding theory)|polar codes]]<ref>{{Cite journal|title=Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels - IEEE Journals & Magazine|journal=IEEE Transactions on Information Theory|volume=55|issue=7|pages=3051–3073|language=en-US|doi=10.1109/TIT.2009.2021379|year=2009|last1=Arikan|first1=Erdal|hdl=11693/11695|arxiv=0807.3917|s2cid=889822 }}</ref> for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in [[theoretical computer science]].
Reed–Muller codes generalize the [[Reed–Solomon error correction|Reed–Solomon codes]] and the [[Hadamard code|Walsh–Hadamard code]]. Reed–Muller codes are [[Linear code|linear block codes]] that are [[locally testable code|locally testable]], [[locally decodable code|locally decodable]], and [[List decoding|list decodable]]. These properties make them particularly useful in the design of [[probabilistically checkable proof]]s.
=== Encoder ===
A [[block code]] can have one or more encoding functions <math display="inline"> C:\{0,1\}^k\to\{0,1\}^{n} </math> that map messages <math display="inline"> x\in\{0,1\}^k </math> to codewords <math display="inline"> C(x)\in\{0,1\}^{n} </math>. The Reed–Muller code {{nowrap|RM(''r'', ''m'')}} has [[Block code#The message length k|message length]] <math>\textstyle k=\sum_{i=0}^r \binom{m}{i}</math> and [[Block code#The block length n|block length]] <math>\textstyle n=2^m</math>. One way to define an encoding for this code is based on the evaluation of [[multilinear polynomial]]s with ''m'' variables and [[total degree]] at most ''r''. Every multilinear polynomial over the [[finite field]] with two elements can be written as follows:
<math display="block">p_c(Z_1,\dots,Z_m) = \sum_{\underset{|S|\le r}{S\subseteq\{1,\dots,m\}}} c_S\cdot \prod_{i\in S} Z_i\,.</math>
The <math display="inline"> Z_1,\dots,Z_m </math> are the variables of the polynomial, and the values <math display="inline"> c_S\in\{0,1\} </math> are the coefficients of the polynomial.
The fact that the codeword <math>C(x)</math> suffices to uniquely reconstruct <math>x</math> follows from [[Lagrange polynomial|Lagrange interpolation]], which states that the coefficients of a polynomial are uniquely determined when sufficiently many evaluation points are given. Since <math>C(0)=0</math> and <math>C(x+y)=C(x)+C(y) \bmod 2</math> holds for all messages <math>x,y\in\{0,1\}^k</math>, the function <math>C</math> is a [[linear map]]. Thus the Reed–Muller code is a [[linear code]].
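For illustration, this encoding map can be written out directly. The following Python sketch (the function name <code>rm_encode</code>, the representation of a message as a dictionary of coefficients, and the lexicographic enumeration of the evaluation points are illustrative choices made here, not standard notation) evaluates the polynomial <math display="inline">p_c</math> at all <math display="inline">2^m</math> points of <math display="inline">\{0,1\}^m</math> to produce the codeword:
<syntaxhighlight lang="python">
from itertools import combinations, product

def rm_encode(coeffs, m, r):
    """Encode a message given as a dictionary mapping each subset S of {1, ..., m}
    with |S| <= r (as a frozenset) to its coefficient c_S in {0, 1}; subsets missing
    from the dictionary are treated as having coefficient 0."""
    monomials = [frozenset(S) for d in range(r + 1)
                 for S in combinations(range(1, m + 1), d)]
    codeword = []
    for z in product([0, 1], repeat=m):         # all evaluation points (z_1, ..., z_m)
        value = 0
        for S in monomials:
            term = coeffs.get(S, 0)
            for i in S:                          # multiply by the variables in S
                term &= z[i - 1]
            value ^= term                        # addition over GF(2)
        codeword.append(value)
    return codeword

# Example: the polynomial 1 + Z_1 + Z_1*Z_2 in RM(2, 3) gives a codeword of length 8.
msg = {frozenset(): 1, frozenset({1}): 1, frozenset({1, 2}): 1}
print(rm_encode(msg, m=3, r=2))                  # [1, 1, 1, 1, 0, 0, 1, 1]
</syntaxhighlight>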
=== Decoder ===
As was already mentioned, Lagrange interpolation can be used to efficiently retrieve the message from a codeword. However, a decoder needs to work even if the codeword has been corrupted in a few positions, that is, when the received word is different from any codeword. In this case, a local decoding procedure can help.
To take the possibility of errors into account, one can also observe that the variables not in <math display="inline"> \mu </math> may be fixed to any values, not only to 0. So instead of computing a single sum with the other variables set to 0, the sum is computed <math display="inline"> 2^{m-d} </math> times, once for each fixed valuation of the other variables. If there is no error, all of these sums are equal to the coefficient being sought.
The algorithm then takes the majority of the answers as the value of the coefficient. If the minority is larger than the maximum number of errors possible, the decoding step fails, since there are too many errors.
Once a coefficient is computed, if it is 1, the code is updated by removing the evaluation of the monomial <math display="inline"> \mu </math> from the input code, and the algorithm continues with the next monomial, in decreasing order of degree.
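The procedure just described can be summarized in a short Python sketch (the function name <code>rm_decode</code>, the indexing of the word by evaluation points in lexicographic order, and the dictionary of recovered coefficients are illustrative choices made here, not part of a standard presentation):
<syntaxhighlight lang="python">
from itertools import combinations, product

def rm_decode(word, m, r):
    """Majority-logic decoding sketch for RM(r, m), following the procedure above.
    `word` is a list of 2**m bits, assumed indexed by the evaluation points
    (z_1, ..., z_m) in lexicographic order with z_1 as the most significant bit."""
    word = list(word)
    coeffs = {}
    max_errors = (2 ** (m - r) - 1) // 2           # largest number of correctable errors
    for d in range(r, -1, -1):                      # monomials in decreasing order of degree
        for S in combinations(range(1, m + 1), d):
            others = [i for i in range(1, m + 1) if i not in S]
            votes = []
            # One sum for each of the 2**(m-d) fixed valuations of the other variables.
            for fixed in product([0, 1], repeat=len(others)):
                total = 0
                for free in product([0, 1], repeat=d):
                    z = [0] * m
                    for i, b in zip(S, free):
                        z[i - 1] = b
                    for i, b in zip(others, fixed):
                        z[i - 1] = b
                    total ^= word[int("".join(map(str, z)), 2)]
                votes.append(total)
            minority = min(sum(votes), len(votes) - sum(votes))
            if minority > max_errors:
                raise ValueError("too many errors to decode")
            c = 1 if sum(votes) > len(votes) - sum(votes) else 0
            coeffs[S] = c
            if c == 1:                              # remove the monomial's evaluation from the word
                for pos, z in enumerate(product([0, 1], repeat=m)):
                    if all(z[i - 1] == 1 for i in S):
                        word[pos] ^= 1
    return coeffs
</syntaxhighlight>
On a corrupted word of RM(2, 4), for instance, the inner loop produces exactly the kinds of sums that are written out explicitly in the example below, and the maximum number of correctable errors is 1.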
The four sums do not agree (so we know there is an error), but the minority is not larger than the maximum number of errors allowed (1), so we take the majority and the coefficient of <math display="inline"> \mu </math> is 1.
We remove <math display="inline"> \mu </math> from the code before continuing:
* <math display="inline"> \mu=X_2X_4 </math>
** 1100 1111 <u>00</u>00 <u>01</u>11. Sum is 1
** 1100 1111 00<u>00</u> 01<u>11</u>. Sum is 0
One error detected, coefficient is 0, no change to current code.
* <math display="inline"> \mu=X_1X_4 </math>
** <u>11</u>00 1111 <u>00</u>00 0111. Sum is 0
** 1100 <u>11</u>11 0000 <u>01</u>11. Sum is 1
** 1100 11<u>11</u> 0000 01<u>11</u>. Sum is 0
One error detected, coefficient is 0, no change to current code.
* <math display="inline"> \mu=X_2X_3 </math>
** <u>1</u>1<u>0</u>0 <u>1</u>1<u>1</u>1 0000 0111. Sum is 1
** 1<u>1</u>0<u>0</u> 1<u>1</u>1<u>1</u> 0000 0111. Sum is 1
** 1100 1111 <u>0</u>0<u>0</u>0 <u>0</u>1<u>1</u>1. Sum is 1
** 1100 1111 0<u>0</u>0<u>0</u> 0<u>1</u>1<u>1</u>. Sum is 0
** 1100 <u>1</u>1<u>0</u>0 0000 <u>0</u>1<u>0</u>0. Sum is 1
** 1100 1<u>1</u>0<u>0</u> 0000 0<u>1</u>0<u>0</u>. Sum is 0
One error detected, coefficient is 1, the evaluation of <math display="inline"> \mu </math> is 0000 0000 0011 0011, and the current code is now 1100 1100 0011 0111.
* <math display="inline"> \mu=X_1X_2</math>
** <u>1</u>100 <u>1</u>100 <u>0</u>011 <u>0</u>111. Sum is 0
** 11<u>0</u>0 11<u>0</u>0 00<u>1</u>1 01<u>1</u>1. Sum is 0
** 110<u>0</u> 110<u>0</u> 001<u>1</u> 011<u>1</u>. Sum is 0
One error detected, coefficient is 0, no change to current code.
We now know all the coefficients of degree 2 of the polynomial, so we can move on to the monomials of degree 1. Notice that for each next (lower) degree, there are twice as many sums, and each sum runs over half as many bits.
* <math display="inline"> \mu=X_4 </math>
** 1100 1100 <u>00</u>11 0111. Sum is 0
** 1100 1100 00<u>11</u> 0111. Sum is 0
** 1100 1100 0011 <u>01</u>11. Sum is 1
** 1100 1100 0011 01<u>11</u>. Sum is 0
One error detected, coefficient is 0, no change to current code.
* <math display="inline"> \mu=X_3 </math>
** <u>1</u>1<u>0</u>0 1100 0011 0111. Sum is 1
Then we find 0 for <math display="inline"> \mu=X_2 </math> and 1 for <math display="inline"> \mu=X_1 </math>, and the current code becomes 1111 1111 1111 1011.
For degree 0, we have 16 sums of only 1 bit each. The minority is still of size 1, and we find <math display="inline"> p_x=1+X_1+X_3+X_1X_3+X_2X_3+X_3X_4 </math>.
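This final step can also be checked mechanically: assuming all earlier steps succeeded, the word that remains after removing every monomial of degree at least 1 is constant except at the corrupted positions, so the majority of its bits gives the constant coefficient and any minority bit marks an error location. A small Python sketch using the residual word from the example above:
<syntaxhighlight lang="python">
# Residual word after removing all monomials of degree >= 1 (from the example above).
residual = [int(b) for b in "1111111111111011"]
ones = sum(residual)
constant_coeff = 1 if ones > len(residual) - ones else 0
error_positions = [i + 1 for i, b in enumerate(residual) if b != constant_coeff]
print(constant_coeff, error_positions)   # prints: 1 [14] -- the single corrupted position
</syntaxhighlight>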
=== Generalization to larger alphabets via low-degree polynomials ===
=== Table of all RM(r,m) codes for m≤5 ===
All {{math|1=RM(''r'', ''m'')}} codes with <math>0\le m\le 5</math> and alphabet size 2 are displayed here, annotated with the standard [n,k,d] [[Block code#Popular notation|coding theory notation]] for [[block code]]s. The code {{math|1=RM(''r'', ''m'')}} is a <math>\textstyle [2^m,k,2^{m-r}]_2</math>-code, that is, it is a [[linear code]] over a [[binary set|binary alphabet]], has [[Block code#The block length n|block length]] <math>\textstyle 2^m</math>, [[Block code#The message length k|message length]] (or dimension) {{mvar|k}}, and [[Block code#The distance d|minimum distance]] <math>\textstyle 2^{m-r}</math>.
{| class="wikitable"
! Code family !! Parameters !! Example with ''m'' = 5 !! Name
|-
| {{math|1=RM(''m'',''m'')}} || ({{math|1=2<sup>''m''</sup>}}, {{math|1=2<sup>''m''</sup>}}, 1) || RM(5,5)<br />(32, 32, 1) || universe codes
|-
| {{math|1=RM(''m'' − 2, ''m'')}} || {{math|1=(2<sup>''m''</sup>, 2<sup>''m''</sup>−''m''−1, 4)}} || ||
|-
| {{math|1=RM(''r'', ''m''=2''r''+1)}} || {{math|1=(2<sup>2''r''+1</sup>, 2<sup>2''r''</sup>, 2<sup>''r''+1</sup>)}} || RM(2,5)<br />(32, 16, 8) || [[self-dual code]]s
|-
| {{math|1=RM(1,''m'')}} || {{math|1=(2<sup>''m''</sup>, ''m''+1, 2<sup>''m''−1</sup>)}} || ||
|}
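Every entry in this table follows from the general parameter formula given above; the following short Python sketch regenerates the [''n'', ''k'', ''d''] triples of all RM(''r'', ''m'') with ''m'' ≤ 5:
<syntaxhighlight lang="python">
from math import comb

# Parameters [n, k, d] of RM(r, m) for all 0 <= r <= m <= 5.
for m in range(6):
    for r in range(m + 1):
        n = 2 ** m                                  # block length
        k = sum(comb(m, i) for i in range(r + 1))   # message length (dimension)
        d = 2 ** (m - r)                            # minimum distance
        print(f"RM({r},{m}) = [{n},{k},{d}]")
</syntaxhighlight>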
==External links==
* [http://ocw.mit.edu MIT OpenCourseWare], 6.451 Principles of Digital Communication II, Lecture Notes section 6.4
*{{cite journal |last1=Weiss |first1=E. |title=Generalized Reed-Muller codes |journal=Information and Control |date=September 1962 |volume=5 |issue=3 |pages=213–222 |doi=10.1016/s0019-9958(62)90555-7 |issn=0019-9958 |doi-access=free}}
{{CCSDS}}