{{Short description|Entropy encoding}}
'''Unary coding''',<ref group="nb" name="NB1"/> or the [[unary numeral system]], is an [[entropy encoding]] that represents a [[natural number]], ''n'', with ''n'' ones followed by a zero (if ''natural number'' is understood as ''non-negative integer'') or with ''n'' − 1 ones followed by a zero (if ''natural number'' is understood as ''strictly positive integer''). The code length is thus ''n'' + 1 under the first definition and ''n'' under the second. Written vertically, a unary code behaves like the mercury in a [[thermometer]], growing taller or shorter as ''n'' increases or decreases, and so is sometimes called a '''thermometer code'''.<ref>{{Cite web |title=University of Alberta Dictionary of Cognitive Science: Thermometer Code |url=http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Dictionary/contents/T/thermcode.html |access-date=2025-05-31 |website=www.bcp.psych.ualberta.ca}}</ref> An alternative representation uses ''n'' or ''n'' − 1 zeros followed by a one, swapping the roles of the two digits [[without loss of generality]]. For example, the first ten unary codes are:
{| class="wikitable"
! n (non-negative) !! n (strictly positive) !! Unary code !! Alternative
|-
| 0 || 1 || 0 || 1
|-
| 1 || 2 || 10 || 01
|-
| 2 || 3 || 110 || 001
|-
| 3 || 4 || 1110 || 0001
|-
| 4 || 5 || 11110 || 00001
|-
| 5 || 6 || 111110 || 000001
|-
| 6 || 7 || 1111110 || 0000001
|-
| 7 || 8 || 11111110 || 00000001
|-
| 8 || 9 || 111111110 || 000000001
|-
| 9 || 10 || 1111111110 || 0000000001
|}
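Both conventions from the table can be sketched in Python (a minimal illustration; the function names are chosen here for clarity and are not standard):

```python
def unary_encode(n):
    """Encode a non-negative integer n as n ones followed by a zero."""
    return "1" * n + "0"

def unary_decode(code):
    """Decode a unary codeword by counting the ones before the terminating zero."""
    return code.index("0")

def unary_encode_positive(n):
    """Strictly-positive convention: n is encoded as n - 1 ones and a zero."""
    return "1" * (n - 1) + "0"
```

For example, <code>unary_encode(4)</code> yields <code>11110</code>; the zeros-then-one alternative is obtained by complementing each digit.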
Unary coding is an ''optimally efficient'' encoding for the following discrete [[probability distribution]], in the sense that the length of each codeword, ''n'', equals the [[self-information]] −log<sub>2</sub> P(''n'') of the value it encodes, so the expected code length attains the distribution's [[Entropy (information theory)|entropy]]:
:<math>\operatorname{P}(n) = 2^{-n}\,</math>
In symbol-by-symbol coding, it is optimal for any [[geometric distribution]]
:<math>\operatorname{P}(n) = (k-1)k^{-n}\,</math>
for which ''k'' ≥ φ ≈ 1.618, the [[golden ratio]], or, more generally, for any discrete distribution for which
:<math>\operatorname{P}(n) \ge \operatorname{P}(n+1) + \operatorname{P}(n+2)\, </math>
for <math>n=1,2,3,...</math>. Although it is the optimal symbol-by-symbol coding for such probability distributions, [[Golomb coding]] achieves better compression capability for the geometric distribution because it does not consider input symbols independently, but rather implicitly groups the inputs.
Unary coding is both a [[prefix-free code]] and a [[self-synchronizing code]].
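The optimality claim can be checked numerically. The following sketch truncates the distribution at a large cutoff (an assumption made only for this illustration) and compares the expected code length, under the strictly-positive convention where the codeword for ''n'' has length ''n'', with the distribution's entropy:

```python
from math import log2

# P(n) = 2**-n for n = 1, 2, 3, ...; truncate at N, where the tail is negligible.
N = 60
probs = [(n, 2.0 ** -n) for n in range(1, N + 1)]

expected_length = sum(p * n for n, p in probs)   # codeword for n has length n
entropy = -sum(p * log2(p) for _, p in probs)    # Shannon entropy in bits

# Both sums converge to 2 bits, so unary coding meets the entropy bound here.
print(expected_length, entropy)
```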
==Unary code in use today==
==Unary coding in biological networks==
Unary coding is used in the [[neural circuit]]s responsible for [[birdsong]] production.<ref name="Fiete_2007"/><ref name="Moore_2011"/> The nucleus in the songbird brain that takes part in both the learning and the production of birdsong is the HVC ([[high vocal center]]). The command signals for different notes in the birdsong emanate from different points in the HVC. This amounts to a spatial (position) code, an efficient strategy for biological circuits owing to its inherent simplicity and robustness.
==Standard run-length unary codes==
Any stream of binary data can be read as a sequence of alternating run-lengths of 1s and 0s. Each run conforms to the standard definition of unary, i.e. ''N'' digits of the same value (1 or 0), and since every run contains at least one digit, run-lengths represent ''strictly positive integers''.
{| class="wikitable"
! n !! RL code !! Next code
|-
| 1 || 1 || 0
|-
| 2 || 11 || 00
|-
| 3 || 111 || 000
|-
| 4 || 1111 || 0000
|-
| 5 || 11111 || 00000
|-
| 6 || 111111 || 000000
|-
| 7 || 1111111 || 0000000
|-
| 8 || 11111111 || 00000000
|-
| 9 || 111111111 || 000000000
|-
| 10 || 1111111111 || 0000000000
|-
| colspan="3" | ...
|}
Because runs alternate, these codes are guaranteed to end validly at any point in a stream of arbitrary data when reading. When writing, the first bit of each run is determined by the previous run, so it can instead carry one extra bit of information while keeping the overall and per-integer unary code lengths at exactly ''N''.
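Reading a bit stream as alternating run-lengths can be sketched as follows (an illustrative helper, not a standard routine):

```python
def run_lengths(bits):
    """Split a bit string into its alternating run-lengths.

    Returns (first_bit, lengths); every length is at least 1, so each
    run-length is a strictly positive integer.
    """
    if not bits:
        return None, []
    lengths = [1]
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            lengths[-1] += 1      # same digit: extend the current run
        else:
            lengths.append(1)     # digit flipped: a new run begins
    return bits[0], lengths
```

For example, <code>run_lengths("1110010")</code> returns <code>("1", [3, 2, 1, 1])</code>; given the first bit, the whole stream is recoverable from the lengths alone, since the digit alternates with every run.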
==Uniquely decodable non-prefix unary codes==
The following is an example of [[Uniquely decodable code|uniquely decodable]] unary codes that are not [[prefix code]]s and are not instantaneously decodable (they [http://www.cs.ucf.edu/courses/cap5015/Huff.pdf require look-ahead to decode]):
{| class="wikitable"
! n !! Unary code
!Alternative
|-
| 1 || 1
|0
|-
| 2 || 10
|01
|-
| 3 || 100
|011
|-
| 4 || 1000
|0111
|-
| 5 || 10000
|01111
|-
| 6 || 100000
|011111
|-
| 7 || 1000000
|0111111
|-
| 8 || 10000000
|01111111
|-
| 9 || 100000000
|011111111
|-
| 10 || 1000000000
|0111111111
|-
| colspan="3" | ...
|}
These codes also (when writing unsigned integers) allow the first bit of each codeword to be reused: ''m'' integers of ''N'' unary bits each, plus one additional bit of information, can be transmitted within ''m''·''N'' bits of data.
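Decoding the first column above requires look-ahead, because a codeword's zeros only end where the next 1 begins (or the stream ends); a sketch under that convention:

```python
def decode_with_lookahead(bits):
    """Decode a stream of '1' + (n - 1) zeros codewords (first column above).

    Each codeword starts at a 1 and absorbs the zeros that follow, so the
    decoder must look ahead to the next 1 (or the end of the stream).
    """
    values = []
    i = 0
    while i < len(bits):
        if bits[i] != "1":
            raise ValueError("each codeword must start with 1")
        j = i + 1
        while j < len(bits) and bits[j] == "0":
            j += 1
        values.append(j - i)  # the leading 1 plus its trailing zeros
        i = j
    return values
```

For example, <code>decode_with_lookahead("1100010")</code> returns <code>[1, 4, 2]</code>.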
==Symmetric unary codes==
The following symmetric unary codes can be read and instantaneously decoded in either direction:
{| class="wikitable" style="text-align: center;"
! Unary code
!Alternative
!n (non-negative)
!n (strictly positive)
|-
| 1
|0
|0
|1
|-
| 00
|11
|1
|2
|-
| 010
|101
|2
|3
|-
| 0110
|1001
|3
|4
|-
| 01110
|10001
|4
|5
|-
| 011110
|100001
|5
|6
|-
| 0111110
|1000001
|6
|7
|-
| 01111110
|10000001
|7
|8
|-
| 011111110
|100000001
|8
|9
|-
| 0111111110
|1000000001
|9
|10
|-
| colspan="4" |...
|}
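Each codeword in the first column is a palindrome, with zeros at both ends and ones between (or the bare codeword 1 for zero), which is why it decodes identically in either direction; a sketch of the non-negative convention:

```python
def symmetric_encode(n):
    """Symmetric unary code, non-negative convention (first column above)."""
    if n == 0:
        return "1"
    return "0" + "1" * (n - 1) + "0"

def symmetric_decode(code):
    """Decode one codeword; a palindrome decodes the same when reversed."""
    if code == "1":
        return 0
    return len(code) - 1  # two terminating zeros around n - 1 ones
```

For example, <code>symmetric_encode(3)</code> gives <code>0110</code>, which reads the same backwards.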
== Canonical unary codes ==
{{See also|Canonical Huffman code}}
For unary values where the maximum is known, one can use canonical unary codes, which are of a somewhat numerical nature rather than character-based. The maximum value is assigned the all-zeros string (numerically '0') or the all-ones string (numerically '−1', i.e. <math>2^{n}-1</math>) at the maximum number of digits; each step down from there reduces the number of digits by one and terminates the code with the opposite digit, so every other value ''n'' is written as ''n'' − 1 zeros followed by a terminating one (or the complement). The maximum value (''n'' = 10 in the table below, shown in red) carries no terminating digit, since no codeword extends beyond it.
{| class="wikitable"
! n !! Unary code
!Alternative
|-
| 1 || 1
|0
|-
| 2 || 01
|10
|-
| 3 || 001
|110
|-
| 4 || 0001
|1110
|-
| 5 || 00001
|11110
|-
| 6 || 000001
|111110
|-
| 7 || 0000001
|1111110
|-
| 8 || 00000001
|11111110
|-
| 9 || 000000001
|111111110
|- style="color: red;"
| 10 || 000000000
|111111111
|}
Canonical codes can [http://www.cs.ucf.edu/courses/cap5015/Huff.pdf require less processing time to decode] when they are processed as numbers rather than as strings, as with a [[canonical Huffman code]]. If more than one code of a given length is required (i.e. some codes are no longer purely unary), those codes are obtained by incrementing or decrementing the numeric value without reducing the length.
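With a known maximum, the table above can be generated and decoded numerically; a sketch assuming a maximum of 10, as in the table (the <code>max_n</code> parameter is an assumption of this illustration):

```python
def canonical_encode(n, max_n=10):
    """Canonical unary code (first column above) for 1 <= n <= max_n."""
    if n == max_n:
        return "0" * (max_n - 1)      # the maximum has no terminating 1
    return "0" * (n - 1) + "1"

def canonical_decode(bits, max_n=10):
    """Decode one codeword from the front of bits; returns (value, rest)."""
    zeros = 0
    for i, b in enumerate(bits):
        if b == "1":
            return zeros + 1, bits[i + 1:]
        zeros += 1
        if zeros == max_n - 1:        # known maximum: no terminator needed
            return max_n, bits[i + 1:]
    raise ValueError("truncated codeword")
```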
==Generalized unary coding==
A generalized version of unary coding was presented by [[Subhash Kak]] to represent numbers much more efficiently than standard unary coding.<ref name="Kak_2015"/> Here is an example of generalized unary coding:
{| class="wikitable"
! n !! Unary code !! Generalized unary
|}
{{Reflist|refs=
<ref name="Fiete_2007">{{cite book |author-last1=Fiete |author-first1=I. R. |author-first2=H. S. |author-last2=Seung |chapter=Neural network models of birdsong production, learning, and coding |title=New Encyclopedia of Neuroscience |editor-first1=L. |editor-last1=Squire |editor-first2=T. |editor-last2=Albright |editor-first3=F. |editor-last3=Bloom |editor-first4=F. |editor-last4=Gage |editor-first5=N. |editor-last5=Spitzer |publisher=[[Elsevier]] |date=2007}}</ref>
<ref name="Moore_2011">{{cite journal |author-last1=Moore |author-first1=J. M. |display-authors=etal |date=2011 |title=Motor pathway convergence predicts syllable repertoire size in oscine birds |journal=Proc. Natl. Acad. Sci. USA |volume=108 |issue=39 |pages=16440–16445 |doi=10.1073/pnas.1102077108 |pmid=21918109 |pmc=3182746|bibcode=2011PNAS..10816440M |doi-access=free }}</ref>
<ref name="Kak_2015">{{cite journal |author-last1=Kak |author-first1=S. |date=2015 |title=Generalized unary coding |journal=Circuits, Systems and Signal Processing |volume=35 |issue=4 |pages=1419–1426 |doi=10.1007/s00034-015-0120-7|s2cid=27902257 }}</ref>
}}
[[Category:Coding theory]]
[[Category:Entropy coding]]
[[Category:Data compression]]