Golomb coding: Difference between revisions

Line 23:
Formally, the two parts are given by the following expression, where {{mvar|x}} is the nonnegative integer being encoded:
 
:<math display="block">q = \left \lfloor \frac{x}{M} \right \rfloor</math>
 
and
 
:<math display="block">r = x - qM.</math>
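The split above is a plain Euclidean division; a minimal Python sketch (the helper name <code>golomb_parts</code> is illustrative, not from any standard library):

```python
def golomb_parts(x: int, m: int) -> tuple[int, int]:
    """Split a nonnegative integer x into the Golomb quotient and remainder.

    The quotient q = floor(x / m) is then sent in unary, and the
    remainder r = x - q*m in truncated binary.
    """
    q, r = divmod(x, m)
    return q, r

# For example, with x = 9 and M = 4: q = 2, r = 1.
```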
 
[[File:GolombCodeRedundancy.svg|thumb|upright=1.5|This image shows the redundancy, in bits, of the Golomb code, when {{mvar|M}} is chosen optimally, for {{math| 1 − ''p''(0) &ge; 0.45}}]]
Line 34:
 
The integer {{mvar|x}} treated by Golomb was the run length of a [[Bernoulli process]], which has a [[geometric distribution]] starting at 0. The best choice of parameter {{mvar|M}} is a function of the corresponding Bernoulli process, which is parameterized by <math>p = P(x=0)</math>, the probability of success in a given [[Bernoulli trial]]. {{mvar|M}} is either the median of the distribution or the median ±1. It can be determined by these inequalities:
: <math display="block">(1-p)^M + (1-p)^{M+1} \leq 1 < (1-p)^{M-1} + (1-p)^M,</math>
which are solved by
: <math display="block">M = \left\lceil -\frac{\log(2 -p)}{\log(1-p)}\right\rceil.</math>
 
For the example with {{math|''p''(0) {{=}} 0.2}}:
: <math display="block">M = \left\lceil -\frac{\log(1.8)}{\log(0.8)}\right\rceil = \left\lceil 2.634 \right\rceil = 3.</math>
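The formula for {{mvar|M}} translates directly into code; a short Python sketch (the function name <code>optimal_golomb_m</code> is illustrative):

```python
import math

def optimal_golomb_m(p: float) -> int:
    """Optimal Golomb parameter M for a geometric source with P(x = 0) = p.

    Implements M = ceil(-log(2 - p) / log(1 - p)); the base of the
    logarithm cancels, so natural log is used here.
    """
    return math.ceil(-math.log(2 - p) / math.log(1 - p))

# For p = 0.2: -log(1.8) / log(0.8) ≈ 2.634, so M = 3.
```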
 
The Golomb code for this distribution is equivalent to the [[Huffman code]] for the same probabilities, if it were possible to compute the Huffman code for the infinite set of source values.
Line 143:
Consider using a Rice code with a binary portion having {{mvar|b}} bits to run-length encode sequences where ''P'' has probability {{mvar|p}}. If <math>\mathbb{P}[\text{bit is part of }k\text{-run}]</math> is the probability that a bit will be part of a {{mvar|k}}-bit run (<math>k-1</math> ''P''s and one ''Q'') and <math>(\text{compression ratio of }k\text{-run})</math> is the compression ratio of that run, then the expected compression ratio is
<!-- below mostly comes from above reference (Kiely), but not exactly, so leave uncited for now -->
:<math display="block">\begin{align}
\mathbb{E}[\text{compression ratio}]
&= \sum_{k=1}^\infty (\text{compression ratio of }k\text{-run}) \cdot \mathbb{P}[\text{bit is part of }k\text{-run}] \\