Huffman coding

This is an old revision of this page, as edited by Alex Vinokur~enwiki at 05:17, 4 July 2004 (Main properties and variations).


In computer science, Huffman coding is an entropy encoding algorithm used for lossless data compression that finds an optimal way of encoding symbols based on the relative frequency of each character. It was developed by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".

Huffman coding uses a specific method for choosing the representations for each symbol, resulting in a prefix-free code (that is, no bit string of any symbol is a prefix of the bit string of any other symbol) that expresses the most common characters in the shortest way possible. It has been proven that Huffman coding is the most effective compression method of this type: no other mapping of source symbols to strings of bits will produce a smaller output when the actual symbol frequencies agree with those used to create the code.

For a symbol set whose size is a power of two and whose probability distribution is uniform, Huffman coding is equivalent to simple binary block encoding.

History

In 1951 David Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam. The professor, Robert M. Fano, assigned a term paper on the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sorted binary tree, and quickly proved this method the most efficient.

A similar idea had been used before in Shannon-Fano coding (created by Claude Shannon, the inventor of information theory, together with Fano, Huffman's professor), but Huffman fixed its major flaw by building the tree from the bottom up instead of from the top down.

Basic technique

The technique works by creating a binary tree of symbols:

  1. Start with as many trees as there are symbols.
  2. While there is more than one tree:
    1. Find the two trees with the smallest total weight.
    2. Combine them into a single tree whose weight is the sum of the two, making one the left child and the other the right.
  3. Now the tree contains all the symbols. A '0' represents following the left child; a '1' represents following the right child.
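The procedure above can be sketched in a short Python program (an illustrative implementation, not from the original text; the tie-breaking counter exists only to keep the heap comparisons well-defined):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table for the symbols in `text`.

    A tree is either a leaf (a one-character string) or a
    (left, right) pair of subtrees.
    """
    freq = Counter(text)
    # Step 1: one single-leaf tree per symbol, keyed by its weight.
    heap = [(weight, i, sym) for i, (sym, weight) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    if tie == 1:                          # degenerate one-symbol input
        return {heap[0][2]: "0"}
    # Step 2: repeatedly merge the two lightest trees.
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tie, (left, right)))
        tie += 1
    # Step 3: read codes off the tree ('0' = left, '1' = right).
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, str):
            codes[tree] = prefix
        else:
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
# The most frequent symbol ('a') receives the shortest code.
```

Using a binary heap makes each merge O(log n); if the symbol weights are sorted once up front, the merging phase can also be done in linear time with two queues.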

Main properties

The frequencies used can be generic ones for the application ___domain, based on average experience, or they can be the actual frequencies found in the text being compressed. (The latter variation requires that a frequency table or other hint about the encoding be stored with the compressed text; implementations employ various tricks to store these tables efficiently.)

Huffman coding is optimal when the probability of each input symbol is a negative power of two (1/2, 1/4, 1/8, ...). Prefix-free codes tend to be slightly inefficient on small alphabets, where probabilities frequently fall between these values. Expanding the alphabet by coalescing multiple symbols into "words" before Huffman coding can improve compression somewhat.
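A small worked check of the power-of-two case (the distribution here is a made-up example): when every probability is a negative power of two, the optimal code lengths -log2(p) are whole numbers of bits, and the average code length equals the source entropy exactly.

```python
import math

# Hypothetical source with power-of-two probabilities.
probs = {"a": 0.5, "b": 0.25, "c": 0.25}

# Optimal code lengths are -log2(p): here a -> 1 bit, b and c -> 2 bits.
lengths = {s: int(-math.log2(p)) for s, p in probs.items()}
avg_len = sum(p * lengths[s] for s, p in probs.items())
entropy = -sum(p * math.log2(p) for p in probs.values())
# Average length meets the entropy lower bound exactly: both are 1.5 bits.
```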

Extreme cases of Huffman codes are connected with Fibonacci numbers: symbol weights that grow like the Fibonacci sequence produce the most unbalanced possible tree, and hence the longest codewords.

Arithmetic coding produces slight gains over Huffman coding, but in practice these gains have not been large enough to offset arithmetic coding's higher computational complexity and patent royalties. (As of November 2001, IBM owns patents on the core concepts of arithmetic coding in several jurisdictions.)

Variations

Adaptive Huffman coding

A variation called adaptive Huffman coding calculates the frequencies dynamically based on recent actual frequencies in the source string. This is somewhat related to the LZ family of algorithms.

Huffman Template algorithm

Most often, the weights used in implementations of Huffman coding represent numeric probabilities, but the algorithm given above does not require this; it requires only a way to order weights and to add them. The Huffman template algorithm generalizes the method to non-numerical weights (costs, frequencies).
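As an illustration of this point (a hypothetical sketch, not the published template algorithm itself), the merge loop below touches weights only through `+` and `<`, so any type providing those two operations works:

```python
import heapq
from dataclasses import dataclass

# Hypothetical non-numeric weight type: the algorithm uses weights
# only via `+` and `<`, which is all Huffman's procedure needs.
@dataclass(frozen=True, order=True)
class Cost:
    value: float

    def __add__(self, other):
        return Cost(self.value + other.value)

def huffman_merge_order(weighted_symbols):
    """Return the order in which trees are merged; weights may be any
    type supporting `+` and `<` (here, the illustrative Cost class)."""
    heap = [(w, i, sym) for i, (sym, w) in enumerate(weighted_symbols)]
    heapq.heapify(heap)
    tie = len(heap)
    merges = []
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merges.append((t1, t2))
        heapq.heappush(heap, (w1 + w2, tie, (t1, t2)))
        tie += 1
    return merges

merges = huffman_merge_order([("a", Cost(5)), ("b", Cost(2)),
                              ("c", Cost(1)), ("d", Cost(1))])
# The two cheapest symbols, 'c' and 'd', are merged first.
```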

n-ary Huffman coding

The n-ary Huffman algorithm uses the alphabet {0, 1, ..., n-1} to encode messages and builds an n-ary tree, combining the n least-weighted trees at each step.
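A sketch of the n-ary variant in Python (illustrative, not from the original text; it assumes the standard trick of padding with zero-weight dummy symbols so that every merge step can combine exactly n trees):

```python
import heapq

def nary_code_lengths(weights, n=3):
    """Code lengths of an n-ary Huffman code over symbols 0..len(weights)-1.

    Pads with zero-weight dummy symbols (sym = None) so that the final
    merge, like every other, combines exactly n trees.
    """
    items = list(enumerate(weights))
    while (len(items) - 1) % (n - 1) != 0:
        items.append((None, 0))
    depth = {sym: 0 for sym, _ in items}
    heap = [(w, i, [sym]) for i, (sym, w) in enumerate(items)]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        merged, total = [], 0
        for _ in range(n):                # merge the n lightest trees
            w, _, syms = heapq.heappop(heap)
            total += w
            merged.extend(syms)
        for s in merged:                  # each symbol moves one level deeper
            depth[s] += 1
        heapq.heappush(heap, (total, tie, merged))
        tie += 1
    return {s: d for s, d in depth.items() if s is not None}

lengths = nary_code_lengths([5, 3, 1, 1], n=3)   # a ternary code
```

For n = 2 this reduces to the binary procedure described under Basic technique.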



Applications

Huffman coding today is often used as a "back-end" to some other compression method. DEFLATE (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 use a front-end model and quantization stage followed by Huffman coding.