Quantization, in image processing, is the process of reducing the number of colors used to represent an image.
The effectively infinite range of colors seen through the lens of a camera cannot all be reproduced on a computer screen. Since a computer can display only a finite number of colors, some quantization is always necessary.
Many early computers were limited in the number of colors they could display at one time, commonly 16 (and later 256). Modern computers can display millions of colors at once, far more than the human eye can distinguish.
Most quantization algorithms allow you to set exactly how many colors to use. With the few colors available on early computers, different quantization algorithms produced very different-looking output images, so considerable effort went into writing sophisticated algorithms that made the results look more lifelike. Nowadays, with large palettes available, almost every algorithm produces output that is visually indistinguishable from the original scene.
Quantization in File Formats
Although it was more common in the past, color quantization with palettes of 256 colors or fewer is nowadays mainly used in GIF and PNG images. Using "nearest-neighbor" quantization and allowing fewer colors usually results in smaller file sizes; however, sophisticated "random dithering" can actually inflate the final size.
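To make the "nearest-neighbor" idea concrete, here is a minimal sketch in Python with NumPy, assuming the palette has already been chosen and is supplied as an N x 3 array (the function name and array layout are assumptions for this example):

```python
import numpy as np

def nearest_neighbor_remap(image, palette):
    """Replace every pixel with the closest palette color (no dithering).

    image:   H x W x 3 uint8 array
    palette: N x 3 uint8 array of the chosen colors
    Returns an H x W x 3 uint8 array that uses only palette colors.
    """
    pixels = image.reshape(-1, 3).astype(np.int64)
    pal = palette.astype(np.int64)
    # Squared Euclidean distance from every pixel to every palette entry.
    dists = ((pixels[:, None, :] - pal[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)
    return palette[nearest].reshape(image.shape)
```

Fewer palette entries mean fewer bits per pixel and longer runs of identical values, which is why the resulting GIF or PNG is usually smaller; dithering deliberately breaks up those runs, which is what can offset the saving.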
A standard quantization algorithm works in two steps (a code sketch of the dithering step follows the list):
- Analyze the image's color usage to select the new color palette.
- Dither the image to the new color palette.
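The palette-selection step is sketched under "Quantization algorithms" below. For the dithering step, one common choice (not the only one) is Floyd-Steinberg error diffusion; a minimal sketch, with the function name and array layout assumed for this example, might look like this:

```python
import numpy as np

def dither_to_palette(image, palette):
    """Step 2: dither an H x W x 3 uint8 image to a fixed N x 3 palette.

    Each pixel is snapped to its nearest palette color, and the resulting
    error is spread to neighboring, not-yet-processed pixels
    (Floyd-Steinberg error diffusion).
    """
    work = image.astype(np.float64)
    pal = palette.astype(np.float64)
    h, w, _ = work.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = np.argmin(((pal - old) ** 2).sum(axis=1))
            out[y, x] = palette[idx]
            err = old - pal[idx]
            # Diffuse the quantization error with the classic
            # 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                work[y, x + 1] += err * (7 / 16)
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * (3 / 16)
                work[y + 1, x] += err * (5 / 16)
                if x + 1 < w:
                    work[y + 1, x + 1] += err * (1 / 16)
    return out
```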
Quantization algorithms
A common quantization algorithm is the octree-based algorithm, as described in the MSDN article on ASP.NET color quantization.
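The MSDN article's implementation is in C#; as a quick way to try an octree quantizer without reimplementing it, many imaging libraries expose one. A small usage sketch with the Pillow library (assuming Pillow 9.1 or later, where the Image.Quantize enum is available; the file names are placeholders):

```python
from PIL import Image

img = Image.open("photo.png").convert("RGB")  # placeholder input file

# Fast octree quantization to a 64-color palette.
octree = img.quantize(colors=64, method=Image.Quantize.FASTOCTREE)
octree.save("photo_octree.png")

# Median cut (described next) for comparison.
median = img.quantize(colors=64, method=Image.Quantize.MEDIANCUT)
median.save("photo_mediancut.png")
```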
Another popular quantization algorithm is the "median cut" algorithm.
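Median cut repeatedly splits the group of colors with the widest spread at the median of its widest channel until the desired number of groups is reached, then averages each group into one palette entry. A simplified sketch (the function name and details are assumptions; real implementations add refinements):

```python
import numpy as np

def median_cut_palette(image, n_colors=16):
    """Step 1: choose an n_colors x 3 palette with median cut (simplified).

    image: H x W x 3 uint8 array
    """
    boxes = [image.reshape(-1, 3).astype(np.int64)]
    while len(boxes) < n_colors:
        # Only boxes with at least two pixels can still be split.
        splittable = [i for i, b in enumerate(boxes) if len(b) > 1]
        if not splittable:
            break
        # Pick the box whose pixels span the widest range in any channel.
        spans = [boxes[i].max(axis=0) - boxes[i].min(axis=0) for i in splittable]
        pick = int(np.argmax([s.max() for s in spans]))
        channel = int(np.argmax(spans[pick]))
        box = boxes.pop(splittable[pick])
        # Split that box at the median of its widest channel.
        box = box[box[:, channel].argsort()]
        mid = len(box) // 2
        boxes.append(box[:mid])
        boxes.append(box[mid:])
    # Each box contributes its average color to the palette.
    return np.array([b.mean(axis=0) for b in boxes], dtype=np.uint8)
```

The resulting palette can then be passed to the nearest-neighbor or dithering sketches above to produce the final indexed image.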