'''Vector quantization''' ('''VQ''') is a classical [[Quantization (signal processing)|quantization]] technique from [[signal processing]] that allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for [[data compression]]. It works by dividing a large set of points ([[coordinate vector|vector]]s) into groups having approximately the same number of points closest to them. Each group is represented by its [[centroid]] point, as in [[k-means]] and some other [[Cluster analysis|clustering]] algorithms.
 
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data have high error. This is why VQ is suitable for [[lossy data compression]]. It can also be used for lossy data correction and [[density estimation]].
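As a minimal sketch of the idea described above, the following Python code trains a codebook with a plain k-means loop and then encodes vectors as the index of their nearest centroid. The function names (`train_codebook`, `quantize`) and parameters are illustrative choices, not part of any standard VQ library.

```python
import numpy as np

def train_codebook(data, k, iters=20, seed=0):
    """Learn k prototype vectors (the codebook) with plain k-means.
    Illustrative sketch only, not a production VQ trainer."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random data points.
    codebook = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        for j in range(k):
            members = data[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Encode each vector as the index of its closest codebook entry."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

Because each input vector is replaced by a small integer index, regions where data are dense receive many centroids and thus low quantization error, while sparse regions are represented coarsely, which is the density-matching behaviour the article describes.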