{{Short description|Entropy encoding}}
'''Unary coding''',<ref group="nb" name="NB1"/> or the '''unary numeral system''', is an [[entropy encoding]] that represents a [[natural number]], ''n'', with a code of length ''n'' + 1 (or ''n''), usually ''n'' ones followed by a zero (if ''natural number'' is understood as ''non-negative integer'') or ''n'' − 1 ones followed by a zero (if ''natural number'' is understood as ''strictly positive integer''). The roles of 1 and 0 may be interchanged, giving the alternative form of ''n'' zeros followed by a one. For example:
{| class="wikitable"
!n (non-negative)
!n (strictly positive)
!Unary code
!Alternative
|-
|0
|1
|0
|1
|-
|1
|2
|10
|01
|-
|2
|3
|110
|001
|-
|3
|4
|1110
|0001
|-
|4
|5
|11110
|00001
|-
|5
|6
|111110
|000001
|-
|6
|7
|1111110
|0000001
|-
|7
|8
|11111110
|00000001
|-
|8
|9
|111111110
|000000001
|-
|9
|10
|1111111110
|0000000001
|}
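For illustration, encoding and decoding a single value can be sketched in a few lines of Python (a minimal example using the non-negative convention above; the function names are illustrative):

<syntaxhighlight lang="python">
def unary_encode(n: int) -> str:
    """Encode a non-negative integer as n ones followed by a terminating zero."""
    return "1" * n + "0"

def unary_decode(bits: str) -> int:
    """Decode one unary codeword: the value is the number of ones before the first zero."""
    return bits.index("0")

assert unary_encode(4) == "11110"
assert unary_decode("11110") == 4
</syntaxhighlight>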
Unary coding is an ''optimally efficient'' encoding for the following discrete [[probability distribution]], meaning that no symbol-by-symbol code can achieve a shorter expected code length for it:{{Citation needed|date=May 2025|reason=Citation needed for this claim of optimality.}}
:<math>\operatorname{P}(n) = 2^{-n}\,</math>
for <math>n=1,2,3,\dots</math>. In symbol-by-symbol coding, it is also optimal for any [[geometric distribution]] of the form
:<math>\operatorname{P}(n) = (k-1)k^{-n}\,</math>
for which ''k'' ≥ φ ≈ 1.6180339887… (the [[golden ratio]]), or, more generally, for any discrete probability distribution for which
:<math>\operatorname{P}(n) \ge \operatorname{P}(n+1) + \operatorname{P}(n+2)\, </math>
for <math>n=1,2,3,\dots</math>. Although it is the optimal symbol-by-symbol coding for such probability distributions, [[Golomb coding]] achieves better compression for the geometric distribution because it does not consider input symbols independently, but rather implicitly groups the inputs.
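The optimality for the distribution <math>\operatorname{P}(n) = 2^{-n}</math> can be verified directly: under the strictly positive convention the codeword for ''n'' is exactly ''n'' bits long, which equals the [[information content]] of ''n'',
:<math>-\log_2 \operatorname{P}(n) = -\log_2 2^{-n} = n ,</math>
so the expected code length coincides with the [[Entropy (information theory)|entropy]] of the source,
:<math>\sum_{n=1}^{\infty} \operatorname{P}(n)\, n = \sum_{n=1}^{\infty} \frac{n}{2^{n}} = 2</math>
bits per symbol, which no uniquely decodable code can improve upon.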
Unary coding is both a [[prefix-free code]] and a [[self-synchronizing code]].
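Because every codeword ends with the terminating symbol, a concatenated stream can be split without look-ahead. A minimal sketch (illustrative function name, non-negative convention):

<syntaxhighlight lang="python">
def unary_decode_stream(bits: str) -> list[int]:
    """Decode a concatenation of unary codewords (n ones followed by a zero each)."""
    # Every '0' terminates a codeword, so splitting on '0' recovers the runs of ones;
    # the trailing empty piece after the final terminator is discarded.
    return [len(word) for word in bits.split("0")[:-1]]

assert unary_decode_stream("110010") == [2, 0, 1]  # "110" + "0" + "10"
</syntaxhighlight>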
==Unary code in use today==
==Standard run-length unary codes==
Any string of binary data can be read as a sequence of unary numbers expressed as alternating run-lengths of 1s and 0s: each run of ''n'' identical bits represents the number ''n''. This conforms to the standard (numeral-system) definition of unary:
{| class="wikitable"
! n !! RL code !! Next code
| colspan="3" | ...
|}
When reading arbitrary data, these codes are guaranteed to terminate on a valid codeword at any data length. When writing, they additionally allow one extra bit of information to be transmitted (the value chosen for the first bit) while maintaining an overall and per-integer unary code length of exactly ''N'' bits.
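As a sketch of the reading process described above (the function name is illustrative), arbitrary binary data can be split into its alternating runs, each run length being one unary value and the value of the first bit carrying the extra bit of information:

<syntaxhighlight lang="python">
from itertools import groupby

def read_run_length_unary(bits: str):
    """Read a bit string as alternating run-length unary codes.

    Returns the first bit (the extra bit of information) and the run
    lengths, each maximal run of identical bits being one unary integer.
    """
    runs = [(bit, len(list(group))) for bit, group in groupby(bits)]
    first_bit = runs[0][0] if runs else None
    return first_bit, [length for _, length in runs]

assert read_run_length_unary("1110010") == ("1", [3, 2, 1, 1])
</syntaxhighlight>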
==Uniquely decodable non-prefix unary codes==
The following is an example of [[Uniquely decodable code|uniquely decodable]] unary codes that are not [[prefix code]]s and are not instantaneously decodable (they [http://www.cs.ucf.edu/courses/cap5015/Huff.pdf require look-ahead to decode]):
{| class="wikitable"
! n !! Unary code
!Alternative
|-
| 1 || 1
|0
|-
| 2 || 10
|01
|-
| 3 || 100
|011
|-
| 4 || 1000
|0111
|-
| 5 || 10000
|01111
|-
| 6 || 100000
|011111
|-
| 7 || 1000000
|0111111
|-
| 8 || 10000000
|01111111
|-
| 9 || 100000000
|011111111
|-
| 10 || 1000000000
|0111111111
|-
| colspan="3" | ...
|}
These codes also allow an extra bit of information to be transmitted when writing unsigned integers (the value chosen for the first bit). They are thus able to transmit ''m'' integers totalling ''m''·''N'' unary bits, plus 1 additional bit of information, within ''m''·''N'' bits of data.
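For illustration, decoding the first of these codes needs one bit of look-ahead: a codeword ends just before the next '1', or at the end of the stream. A minimal sketch (illustrative function name; only the first column of the table is handled):

<syntaxhighlight lang="python">
def decode_lookahead_unary(bits: str) -> list[int]:
    """Decode a stream of codewords of the form '1' followed by n - 1 zeros.

    A codeword ends just before the next '1' (one bit of look-ahead) or at
    the end of the stream, so decoding is not instantaneous.
    """
    values, count = [], 0
    for bit in bits:
        if bit == "1" and count > 0:
            values.append(count)  # the previous codeword has just ended
            count = 1
        else:
            count += 1
    if count > 0:
        values.append(count)      # flush the final codeword
    return values

assert decode_lookahead_unary("1100010") == [1, 4, 2]  # "1" + "1000" + "10"
</syntaxhighlight>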
==Symmetric unary codes==
The following symmetric unary codes can be read and instantaneously decoded in either direction:
{| class="wikitable" style="text-align: center;"
! Unary code
!Alternative
!n (non-negative)
!n (strictly positive)
|-
| 1
|0
|0
|1
|-
| 00
|11
|1
|2
|-
| 010
|101
|2
|3
|-
| 0110
|1001
|3
|4
|-
| 01110
|10001
|4
|5
|-
| 011110
|100001
|5
|6
|-
| 0111110
|1000001
|6
|7
|-
| 01111110
|10000001
|7
|8
|-
| 011111110
|100000001
|8
|9
|-
| 0111111110
|1000000001
|9
|10
|-
| colspan="4" |...
|}
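For illustration, the first column of the table can be generated as follows (a minimal sketch for the non-negative convention; the function name is illustrative):

<syntaxhighlight lang="python">
def symmetric_unary(n: int) -> str:
    """Symmetric unary code for a non-negative integer n (first column above).

    n = 0 maps to '1'; for n >= 1 the codeword is a '0', then n - 1 ones,
    then a '0', so every codeword reads the same forwards and backwards.
    """
    return "1" if n == 0 else "0" + "1" * (n - 1) + "0"

assert [symmetric_unary(n) for n in range(4)] == ["1", "00", "010", "0110"]
</syntaxhighlight>

The alternative column is obtained by inverting every bit of the first.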
== Canonical unary codes ==
{{See also|Canonical Huffman code}}
For unary values where the maximum is known, one can use canonical unary codes, which are somewhat numerical in nature rather than character-based. Starting from the code for the largest ''n'', which is numerically '0' (or '−1', that is <math>2^{n} - 1</math>, in the all-ones form) and uses the maximum number of digits, each subsequent step reduces the number of digits by one and increases or decreases the numerical value by one.{{Clarify|date=May 2025|reason=(1) Are canonical codes only for the positive natural number convention? (2) Why use red to write the final code (corresponding to n=10)? Does it belong or not belong to the canonical format? (3) Why is there an extra empty row after the n=10 row? (4) Is the maximum code the one for n=10 or n=9, both of which use 9 digits? If n=10 is beyond the maximum, its row should be deleted; "reducing the number of digits by one" only makes sense if 9 is the maximum.}}
{| class="wikitable"
! n !! Unary code
!Alternative
|-
| 1 || 1
|0
|-
| 2 || 01
|10
|-
| 3 || 001
|110
|-
| 4 || 0001
|1110
|-
| 5 || 00001
|11110
|-
| 6 || 000001
|111110
|-
| 7 || 0000001
|1111110
|-
| 8 || 00000001
|11111110
|-
| 9 || 000000001
|111111110
|- style="color: red;"
| 10 || 000000000
|111111111
|-
| colspan="3" |
|}
Canonical codes can [http://www.cs.ucf.edu/courses/cap5015/Huff.pdf require less processing time to decode]{{Clarify|reason=Citation deals with [[Canonical Huffman code]]. Is the statement relevant only when dealing with [[Huffman coding]], or is it a general statement about canonical unary codes?|date=May 2025}} when they are processed as numbers rather than as strings. If more than one code is required per symbol length, decoding additionally requires a count of the number of codes of each length, as with a [[canonical Huffman code]].
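As an illustrative sketch of such numerical decoding (assuming the codeword pattern shown in the table, with the known maximum written as all zeros; the function name is illustrative):

<syntaxhighlight lang="python">
def decode_canonical_unary(bits: str, maximum: int = 10):
    """Decode one canonical unary codeword when the maximum value is known.

    Codewords are assumed to be '1', '01', '001', ..., i.e. n - 1 zeros
    followed by a one, except that the maximum value is written as
    maximum - 1 zeros with no terminating one.  Returns (value, bits consumed).
    """
    code = 0                              # numeric value of the bits read so far
    for length in range(1, maximum):
        code = (code << 1) | int(bits[length - 1])
        if code >= 1:                     # the single codeword of this length is numerically 1
            return length, length         # for n < maximum, n equals the codeword length
    return maximum, maximum - 1           # maximum - 1 zeros: the largest value

assert decode_canonical_unary("001") == (3, 3)
assert decode_canonical_unary("000000000") == (10, 9)
</syntaxhighlight>

Because the codeword is interpreted as a number, the decoder only needs numeric comparisons rather than per-character string matching.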
==Generalized unary coding==
[[Category:Coding theory]]
[[Category:Entropy coding]]
[[Category:Data compression]]