Lossless compression: Difference between revisions

removed non-neutral text
Line 27:
For images, this step can be repeated by taking the difference to the pixel above; for videos, the difference to the corresponding pixel in the next frame can then be taken.
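The vertical differencing step can be sketched as follows (a minimal illustration, not taken from any particular codec): each pixel is replaced by its difference to the pixel above it, so smooth regions produce many small values that compress well.

```python
def vertical_differences(image):
    """Replace each row (after the first) by its difference to the row above."""
    result = [list(image[0])]  # the first row is stored as-is
    for r in range(1, len(image)):
        result.append([image[r][c] - image[r - 1][c]
                       for c in range(len(image[0]))])
    return result

def undo_vertical_differences(diffs):
    """Reconstruct the original image by cumulative summation down each column."""
    result = [list(diffs[0])]
    for r in range(1, len(diffs)):
        result.append([diffs[r][c] + result[r - 1][c]
                       for c in range(len(diffs[0]))])
    return result
```

The transform is exactly invertible, so no information is lost; only the statistics of the values change.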
 
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and continues on a higher level with lower resolution with the sums. This is called [[discrete wavelet transform]]. [[JPEG2000]] additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. The values are therefore increased, increasing file size, but the distribution of values could be more peaked. {{Citation needed|date=December 2007}}
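The pairwise sum-and-difference hierarchy can be sketched as an integer Haar-style transform (an illustration of the idea above, not the lifting scheme actually used by JPEG 2000): each level stores the differences and recurses on the sums.

```python
def haar_forward(values):
    """Return (coarse sums, per-level differences); len(values) must be a power of two."""
    details = []
    while len(values) > 1:
        sums = [values[i] + values[i + 1] for i in range(0, len(values), 2)]
        diffs = [values[i] - values[i + 1] for i in range(0, len(values), 2)]
        details.append(diffs)
        values = sums  # continue on the lower-resolution level
    return values, details

def haar_inverse(values, details):
    for diffs in reversed(details):
        expanded = []
        for s, d in zip(values, diffs):
            # s = a + b and d = a - b, so a = (s + d) / 2 and b = (s - d) / 2;
            # s and d always have the same parity, so the division is exact.
            expanded += [(s + d) // 2, (s - d) // 2]
        values = expanded
    return values
```

Because sums and differences of integers are integers, the transform is exactly reversible, as the text notes is required.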
 
Adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixels in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.<ref name="Unser" />
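The adaptive idea can be sketched with a minimal order-0 model (an illustration under simplifying assumptions; real codecs condition on neighboring samples or pixels as described above): symbol counts are updated as each sample is seen, so the probability used for the current symbol comes only from data already processed, which the decoder can mirror.

```python
from collections import Counter

class AdaptiveModel:
    """Illustrative adaptive frequency model for feeding an entropy coder."""

    def __init__(self, alphabet_size):
        # Start every symbol at count 1 so nothing has probability zero.
        self.counts = Counter({s: 1 for s in range(alphabet_size)})
        self.total = alphabet_size

    def probability(self, symbol):
        return self.counts[symbol] / self.total

    def update(self, symbol):
        # Called after coding each symbol; encoder and decoder stay in sync
        # because both update from the same already-decoded history.
        self.counts[symbol] += 1
        self.total += 1
```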
Line 36:
Many of the lossless compression techniques used for text also work reasonably well for [[indexed image]]s. There are other techniques that do not work for typical text but are useful for some images (particularly simple bitmaps), and still others that take advantage of the specific characteristics of images, such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space.
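Run-length encoding is a simple instance of a technique that performs poorly on typical text but well on simple bitmaps with large contiguous areas of the same tone. A minimal sketch:

```python
def rle_encode(pixels):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]
```

On a scan line that is mostly one tone, the run list is far shorter than the pixel sequence; on text-like data with few repeats, it can be longer than the input.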
 
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the {{nowrap|data{{px2}}{{mdash}}{{px2}}}}essentially using [[autoregressive]] models to predict the "next" value and encoding the (possibly small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the ''error'') tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
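The predict-and-encode-the-error idea can be sketched with a simple second-order linear predictor that extrapolates the last two samples (an illustration only; real codecs such as FLAC fit higher-order models per block). Only the residuals would then be entropy-coded.

```python
def residuals(samples):
    """Residuals of the fixed predictor pred[n] = 2*x[n-1] - x[n-2]."""
    out = list(samples[:2])  # the first two samples are stored verbatim
    for n in range(2, len(samples)):
        prediction = 2 * samples[n - 1] - samples[n - 2]
        out.append(samples[n] - prediction)
    return out

def reconstruct(res):
    """Invert residuals() exactly, recovering the original samples."""
    out = list(res[:2])
    for n in range(2, len(res)):
        prediction = 2 * out[n - 1] - out[n - 2]
        out.append(res[n] + prediction)
    return out
```

On a smooth waveform the residuals cluster tightly around zero, exactly the frequent small values (0, +1, −1, …) that the text says can be coded in few bits.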
 
It is sometimes beneficial to compress only the differences between two versions of a file (or, in [[video compression]], of successive images within a sequence). This is called [[delta encoding]] (from the Greek letter [[delta (letter)|Δ]], which in mathematics denotes a difference), but the term is typically used only if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
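Delta encoding between two file versions can be sketched as follows (a minimal illustration assuming, for brevity, that both versions have the same length): the new version is stored as the old version plus the positions where it differs.

```python
def delta(old, new):
    """List of (position, new_byte) pairs where the versions differ."""
    assert len(old) == len(new)  # simplifying assumption for this sketch
    return [(i, b) for i, (a, b) in enumerate(zip(old, new)) if a != b]

def apply_delta(old, changes):
    """Rebuild the new version from the old version and the change list."""
    out = bytearray(old)
    for i, b in changes:
        out[i] = b
    return bytes(out)
```

When the two versions are similar, the change list is much smaller than the new file; both versions here are meaningful on their own, which is what distinguishes delta encoding in the sense described above.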