{{distinguish|Exponential-Golomb coding}}
{{Short description|Lossless data compression method}}
{{More citations needed|section|date=October 2024|talk=Many sections unsourced}}
Formally, the two parts are given by the following expression, where {{mvar|x}} is the nonnegative integer being encoded:

:<math>q = \left\lfloor \frac{x}{M} \right\rfloor</math>

and

:<math>r = x - qM.</math>
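The quotient-and-remainder split described above, followed by unary coding of the quotient and truncated binary coding of the remainder, can be sketched in Python (an illustrative sketch, not from the article; the function name is hypothetical):

```python
def golomb_encode(x: int, m: int) -> str:
    """Encode nonnegative integer x with Golomb parameter m.

    Returns the codeword as a string of '0'/'1' characters.
    """
    q, r = divmod(x, m)          # quotient and remainder: x = q*m + r
    code = "1" * q + "0"         # unary quotient, terminated by a 0
    # Truncated binary encoding of the remainder r in [0, m):
    b = m.bit_length() - 1       # b = floor(log2(m))
    cutoff = (1 << (b + 1)) - m  # the first 'cutoff' remainders use b bits
    if r < cutoff:
        code += format(r, f"0{b}b") if b > 0 else ""
    else:
        code += format(r + cutoff, f"0{b + 1}b")
    return code

# With m = 10: 42 = 4*10 + 2, so four 1s, a 0 delimiter, then "010"
print(golomb_encode(42, 10))  # → "11110010"
```

When {{mvar|M}} is a power of two the cutoff equals {{mvar|M}}, so the truncated-binary branch never triggers and the code reduces to a plain Rice code.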
[[File:GolombCodeRedundancy.svg|thumb|upright=1.5|This image shows the redundancy, in bits, of the Golomb code, when {{mvar|M}} is chosen optimally, for {{math|1 − ''p''(0) ≥ 0.45}}]]
The integer {{mvar|x}} treated by Golomb was the run length of a [[Bernoulli process]], which has a [[geometric distribution]] starting at 0. The best choice of parameter {{mvar|M}} is a function of the corresponding Bernoulli process, which is parameterized by <math>p = P(x=0)</math>, the probability of success in a given [[Bernoulli trial]]. {{mvar|M}} is either the median of the distribution or the median ±1. It can be determined by these inequalities:

:<math>(1-p)^M + (1-p)^{M+1} \leq 1 < (1-p)^{M-1} + (1-p)^{M},</math>

which are solved by

:<math>M = \left\lceil -\frac{\log(2-p)}{\log(1-p)} \right\rceil.</math>
For the example with {{math|''p''(0) {{=}} 0.2}}:

:<math>M = \left\lceil -\frac{\log(1.8)}{\log(0.8)} \right\rceil = \left\lceil 2.634 \right\rceil = 3.</math>
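As a numerical check of the closed-form choice of {{mvar|M}} (an illustrative sketch; the function name is hypothetical):

```python
import math

def optimal_golomb_m(p: float) -> int:
    """Optimal Golomb parameter for a geometric source with p = P(x = 0).

    Evaluates M = ceil(-log(2 - p) / log(1 - p)).
    """
    return math.ceil(-math.log(2 - p) / math.log(1 - p))

print(optimal_golomb_m(0.2))  # → 3, matching the worked example
```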
The Golomb code for this distribution is equivalent to the [[Huffman code]] for the same probabilities, if it were possible to compute the Huffman code for the infinite set of source values.
# Skip the 0 delimiter
# Let <math>b = \lfloor\log_2(M)\rfloor</math>
## Interpret next ''b'' bits as a binary number ''r&prime;''. If <math>r' < 2^{b+1}-M</math> holds, then the remainder <math>r = r'</math>.
## Otherwise interpret ''b'' + 1 bits as a binary number ''r&prime;'', and the remainder <math>r = r' - \left(2^{b+1}-M\right)</math>.
# Compute <math>N = qM + r</math>
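The decoding steps above can be sketched in Python (an illustrative sketch, not from the article; it assumes the bitstream is given as a string of '0'/'1' characters, and the function name is hypothetical):

```python
def golomb_decode(bits: str, m: int) -> int:
    """Decode one Golomb codeword with parameter m from a bit string."""
    # Count leading 1s to recover the unary quotient q
    q = 0
    while bits[q] == "1":
        q += 1
    pos = q + 1                  # skip the 0 delimiter
    b = m.bit_length() - 1       # b = floor(log2(m))
    cutoff = (1 << (b + 1)) - m
    r_prime = int(bits[pos:pos + b], 2) if b > 0 else 0
    if r_prime < cutoff:
        r = r_prime              # short remainder: b bits suffice
    else:
        # long remainder: re-read as b+1 bits and subtract the offset
        r_prime = int(bits[pos:pos + b + 1], 2)
        r = r_prime - cutoff
    return q * m + r             # N = q*M + r

print(golomb_decode("11110010", 10))  # → 42
```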
Consider using a Rice code with a binary portion having {{mvar|b}} bits to run-length encode sequences where ''P'' has a probability {{mvar|p}}. If <math>\mathbb{P}[\text{bit is part of }k\text{-run}]</math> is the probability that a bit will be part of a {{mvar|k}}-bit run (<math>k-1</math> ''P''s and one ''Q'') and <math>(\text{compression ratio of }k\text{-run})</math> is the compression ratio of that run, then the expected compression ratio is
<!-- below mostly comes from above reference (Kiely), but not exactly, so leave uncited for now -->
:<math>\mathbb{E}[\text{compression ratio}] = \sum_{k=1}^\infty (\text{compression ratio of }k\text{-run}) \cdot \mathbb{P}[\text{bit is part of }k\text{-run}].</math>
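This expectation can also be estimated empirically by run-length encoding a simulated Bernoulli stream with a Rice code (an illustrative simulation, not from the article; the function names, the seed, and the stream length are arbitrary assumptions):

```python
import random

def rice_len(x: int, b: int) -> int:
    """Length in bits of the Rice codeword for x with a b-bit binary part."""
    return (x >> b) + 1 + b      # unary quotient + 0 delimiter + b-bit remainder

def simulated_ratio(p: float, b: int, n_bits: int = 100_000, seed: int = 1) -> float:
    """Estimate the expected compression ratio of run-length encoding a
    stream of independent bits (P with probability p), coding each run of
    (run) Ps terminated by a Q as the Rice codeword for its P-count."""
    rng = random.Random(seed)
    run, encoded = 0, 0
    for _ in range(n_bits):
        if rng.random() < p:
            run += 1                     # another P extends the current run
        else:
            encoded += rice_len(run, b)  # the Q closes a (run+1)-bit run
            run = 0
    if run:                              # flush a trailing unterminated run
        encoded += rice_len(run, b)
    return n_bits / encoded              # original bits per encoded bit

print(round(simulated_ratio(0.99, 6), 2))
```

For highly skewed streams (large {{mvar|p}}) with a well-matched {{mvar|b}}, the simulated ratio is well above 1, consistent with the analysis above.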
* [[Elias delta coding]]
* [[Variable-length code]]
* [[Exponential-Golomb coding]]
== References ==
{{DEFAULTSORT:Golomb Coding}}
[[Category:Entropy coding]]
[[Category:Data compression]]