{{short description|Classical quantization technique from signal processing}}
{{Multiple issues|
{{Missing information|something|date=February 2009}}
{{Original research|date=November 2016}}
{{Technical|date=October 2023}}
}}
'''Vector quantization''' ('''VQ''') is a classical [[Quantization (signal processing)|quantization]] technique from [[signal processing]] that allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for [[data compression]]. It works by dividing a large set of points ([[coordinate vector|vector]]s) into groups having approximately the same number of points closest to them. Each group is represented by its [[centroid]] point, as in [[k-means]] and some other [[clustering]] algorithms.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and [[density estimation]].
Vector quantization is based on the [[competitive learning]] paradigm, so it is closely related to the [[self-organizing map]] model and to [[sparse coding]] models used in [[deep learning]] algorithms such as the [[autoencoder]].
== Training ==
The simplest training algorithm for vector quantization is:
# Pick a sample point at random
# Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
# Repeat
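A minimal sketch of this loop in Python (illustrative only: the function name <code>train_vq</code>, the step size <code>eps</code>, the number of steps and the initialisation are assumptions, not part of the algorithm's definition):

<syntaxhighlight lang="python">
import numpy as np

def train_vq(samples, num_centroids, eps=0.05, steps=10_000, seed=0):
    """Simplest VQ training: repeatedly move the nearest centroid
    a small fraction of the way towards a randomly picked sample."""
    rng = np.random.default_rng(seed)
    # Initialise the centroids on randomly chosen sample points (an assumed choice).
    centroids = samples[rng.choice(len(samples), num_centroids, replace=False)].copy()
    for _ in range(steps):
        p = samples[rng.integers(len(samples))]            # pick a sample point at random
        i = np.argmin(((centroids - p) ** 2).sum(axis=1))  # nearest centroid (squared Euclidean distance)
        centroids[i] += eps * (p - centroids[i])           # move it towards the sample
    return centroids
</syntaxhighlight>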
A more sophisticated algorithm reduces the bias in the density matching estimation, and ensures that all points are used, by including an extra sensitivity parameter {{Citation needed|date=November 2016}}:
# Increase each centroid's sensitivity <math>s_i</math> by a small amount
# Pick a sample point <math>P</math> at random
# For each quantization vector centroid <math>c_i</math>, let <math>d(P, c_i)</math> denote the distance of <math>P</math> and <math>c_i</math>
# Find the <math>c_i</math> for which <math>d(P, c_i) - s_i</math> is the smallest
# Move <math>c_i</math> towards <math>P</math> by a small fraction of the distance
# Set <math>s_i</math> to zero
# Repeat
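Continuing the sketch above, only the selection step changes; the sensitivity increment <code>delta</code> is an arbitrary assumed value:

<syntaxhighlight lang="python">
import numpy as np

def train_vq_sensitive(samples, num_centroids, eps=0.05, delta=0.01,
                       steps=10_000, seed=0):
    """VQ training with a sensitivity parameter: centroids that have not
    won recently accumulate sensitivity and become easier to select,
    so every centroid gets used."""
    rng = np.random.default_rng(seed)
    centroids = samples[rng.choice(len(samples), num_centroids, replace=False)].copy()
    s = np.zeros(num_centroids)                      # per-centroid sensitivity
    for _ in range(steps):
        s += delta                                   # increase each sensitivity slightly
        p = samples[rng.integers(len(samples))]      # pick a sample point at random
        d = ((centroids - p) ** 2).sum(axis=1)       # squared distances (assumed metric)
        i = np.argmin(d - s)                         # smallest distance minus sensitivity
        centroids[i] += eps * (p - centroids[i])     # move the winner towards the sample
        s[i] = 0.0                                   # reset the winner's sensitivity
    return centroids
</syntaxhighlight>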
It is desirable to use a cooling schedule to produce convergence: see [[Simulated annealing]]. Another (simpler) method is [[Linde–Buzo–Gray algorithm|LBG]], which is based on [[K-means clustering|K-means]].
The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.
== Applications ==
Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering.
Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group with the data dimensions available, then predicting the result based on the values for the missing dimensions, assuming that they will have the same value as the group's centroid.
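As an illustration, a hedged sketch of this prediction step in Python (all names are hypothetical; <code>codebook</code> is assumed to hold centroids trained as in the previous section, and <code>observed</code> is a boolean mask of the known dimensions):

<syntaxhighlight lang="python">
import numpy as np

def predict_missing(codebook, partial, observed):
    """Fill in the missing dimensions of `partial` by matching only the
    observed dimensions against the codebook and copying the winning
    centroid's values for the rest."""
    d = ((codebook[:, observed] - partial[observed]) ** 2).sum(axis=1)
    best = codebook[np.argmin(d)]            # nearest group on the available dimensions
    completed = partial.copy()
    completed[~observed] = best[~observed]   # assume the group's centroid values elsewhere
    return completed
</syntaxhighlight>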
=== Use in data compression ===
Vector quantization, also called "block quantization" or "pattern matching quantization", is often used in [[lossy data compression]]. It works by encoding values from a multidimensional [[vector space]] into a finite set of values from a discrete [[linear subspace|subspace]] of lower dimension. A lower-space vector requires less storage space, so the data is compressed.
The transformation is usually done by [[projection (mathematics)|projection]] or by using a [[codebook]]. In some cases, a codebook can also be used to [[entropy code]] the discrete value in the same step, by generating a [[prefix code]]d variable-length encoded value as its output.
The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider a ''k''-dimensional vector <math>[x_1,x_2,...,x_k]</math> of amplitude levels. It is compressed by choosing the nearest matching vector from a set of ''n''-dimensional vectors <math>[y_1,y_2,...,y_n]</math>, with ''n'' < ''k''.
All possible combinations of the ''n''-dimensional vector <math>[y_1,y_2,...,y_n]</math> form the [[vector space]] to which all the quantized vectors belong.
Only the index of the codeword in the codebook is sent instead of the quantized values. This conserves space and achieves more compression.
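A minimal sketch of such an encoder and decoder in Python (the names and the brute-force nearest-codeword search are assumptions for illustration; practical coders use faster search structures):

<syntaxhighlight lang="python">
import numpy as np

def encode(codebook, vectors):
    """Map each input vector to the index of its nearest codeword;
    only these indices are stored or transmitted."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)

def decode(codebook, indices):
    """Approximate reconstruction: look each index up in the codebook."""
    return codebook[indices]
</syntaxhighlight>

With a 256-entry codebook, for example, each vector is replaced by a single 8-bit index, regardless of its dimension.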
[[TwinVQ#TwinVQ in MPEG-4|Twin vector quantization]] (VQF) is part of the [[MPEG-4]] standard dealing with time ___domain weighted interleaved vector quantization.
=== Video codecs based on vector quantization ===
{{Expand list|date=August 2008}}
* [[Bink video]]<ref>{{cite web
| title = Bink video
| work = Book of Wisdom
| date = 2009-12-27
| url = http://lists.mplayerhq.hu/pipermail/bow/2009-December/000058.html
| access-date = 2013-03-16 }}
</ref>
* [[Cinepak]]
* [[Daala]] is transform-based but uses [[pyramid vector quantization]] on transformed coefficients<ref>{{cite IETF |title= Pyramid Vector Quantization for Video Coding | first1= JM. |last1= Valin | draft=draft-valin-videocodec-pvq-00 | date=October 2012 |publisher=[[Internet Engineering Task Force|IETF]] |access-date=2013-12-17 |url=https://tools.ietf.org/html/draft-valin-videocodec-pvq-00}} See also arXiv:1602.05209</ref>
* [[Digital Video Interactive]]: Production-Level Video and Real-Time Video
* [[Indeo]]
* [[Microsoft Video 1]]
* Westwood's VQA format, used in many games
* [[QuickTime#QuickTime 1.x|QuickTime]]: [[Apple Video]] (RPZA) and [[QuickTime Graphics Codec|Graphics Codec]] (SMC)
* [[Sorenson codec|Sorenson]] SVQ1 and SVQ3
* [[Smacker video]]
The usage of video codecs based on vector quantization has declined significantly in favor of those based on [[Motion compensation#Block motion compensation|motion compensated]] prediction combined with [[Transform coding#Digital|transform coding]], e.g. those defined in [[MPEG]] standards, as the low decoding complexity of vector quantization has become less relevant.
=== Audio codecs based on vector quantization ===
{{Expand list|date=August 2008}}
* [[AMR-WB+]]
* [[CELP]]
* [[CELT]] (now part of [[Opus (codec)|Opus]]) is transform-based but uses [[pyramid vector quantization]] on transformed coefficients
* [[Codec 2]]
* [[DTS Coherent Acoustics|DTS]]
* [[G.729]]
* [[iLBC]]
* [[Ogg Vorbis]]<ref>
{{cite web
| title = Vorbis I Specification
| publisher = Xiph.org
| date = 2007-03-09
| url = https://xiph.org/vorbis/doc/Vorbis_I_spec.html
}}
</ref>
* [[TwinVQ]]
=== Use in pattern recognition ===
VQ was also used in the 1980s for speech<ref>{{cite book|last=Burton|first=D. K.|author2=Shore, J. E. |author3=Buck, J. T. |title=ICASSP '83. IEEE International Conference on Acoustics, Speech, and Signal Processing |chapter=A generalization of isolated word recognition using vector quantization |volume=8|year=1983|pages=1021–1024|doi=10.1109/ICASSP.1983.1171915}}</ref> and [[speaker recognition]].<ref>{{cite book|last=Soong|first=F.|author2=A. Rosenberg |author3=L. Rabiner |author4=B. Juang |title=ICASSP '85. IEEE International Conference on Acoustics, Speech, and Signal Processing |chapter=A vector quantization approach to speaker recognition |year=1985|volume=1|pages=387–390|doi=10.1109/ICASSP.1985.1168412|s2cid=8970593}}</ref>
Recently it has also been used for efficient [[nearest neighbor search]]
<ref>{{cite journal|author=H. Jegou |author2=M. Douze |author3=C. Schmid|title=Product Quantization for Nearest Neighbor Search|journal=IEEE Transactions on Pattern Analysis and Machine Intelligence|year=2011|volume=33|issue=1|pages=117–128|doi=10.1109/TPAMI.2010.57|pmid=21088323 |url=http://hal.archives-ouvertes.fr/docs/00/51/44/62/PDF/paper_hal.pdf |archive-url=https://web.archive.org/web/20111217142048/http://hal.archives-ouvertes.fr/docs/00/51/44/62/PDF/paper_hal.pdf |archive-date=2011-12-17 |url-status=live|citeseerx=10.1.1.470.8573 |s2cid=5850884 }}</ref>
and on-line signature recognition.<ref>{{cite journal|last=Faundez-Zanuy|first=Marcos|title=On-line signature recognition based on VQ-DTW|journal=Pattern Recognition|year=2007|volume=40|issue=3|pages=981–992|doi=10.1016/j.patcog.2006.06.007}}</ref>
In [[pattern recognition]] applications, one codebook is constructed for each class (each class being a user in biometric applications) using acoustic vectors of this user. In the testing phase the quantization distortion of a testing signal is worked out with the whole set of codebooks obtained in the training phase. The codebook that provides the smallest vector quantization distortion indicates the identified user.
The main advantage of VQ in [[pattern recognition]] is its low computational burden when compared with other techniques such as [[dynamic time warping]] (DTW) and [[hidden Markov model]] (HMM). The main drawback when compared to DTW and HMM is that it does not take into account the temporal evolution of the signals (speech, signature, etc.) because all the vectors are mixed up. In order to overcome this problem a multi-section codebook approach has been proposed.<ref>{{cite journal|last=Faundez-Zanuy|first=Marcos|author2=Juan Manuel Pascual-Gaspar |title=Efficient On-line signature recognition based on Multi-section VQ|journal=Pattern Analysis and Applications|year=2011|volume=14|issue=1|pages=37–45|doi=10.1007/s10044-010-0176-8|s2cid=24868914}}</ref> The multi-section approach consists of modelling the signal with several sections (for instance, one codebook for the initial part, another one for the center and a last codebook for the ending part).
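A sketch of the identification step in Python (hypothetical names; <code>codebooks</code> is assumed to map each class label to that class's trained centroids):

<syntaxhighlight lang="python">
import numpy as np

def identify(codebooks, test_vectors):
    """Return the class whose codebook quantizes the test vectors with
    the smallest total distortion."""
    def distortion(cb):
        d = ((test_vectors[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
        return d.min(axis=1).sum()     # sum of nearest-codeword errors
    return min(codebooks, key=lambda label: distortion(codebooks[label]))
</syntaxhighlight>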
=== Use as clustering algorithm ===
As VQ seeks centroids that act as density points of nearby samples, it can also be used directly as a prototype-based clustering method: each centroid is then associated with one prototype.
By aiming to minimize the expected squared quantization error<ref>{{cite journal|last=Gray|first=R.M.|title=Vector Quantization|journal=IEEE ASSP Magazine|year=1984|volume=1|issue=2|pages=4–29|doi=10.1109/massp.1984.1162229|hdl=2060/19890012969|hdl-access=free}}</ref> and introducing a decreasing learning gain fulfilling the Robbins–Monro conditions, multiple iterations over the whole data set with a concrete but fixed number of prototypes converge to the solution of the [[k-means]] clustering algorithm in an incremental manner.
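One common way to realise such a decreasing gain is a per-prototype count, as in MacQueen's incremental k-means; a sketch under that assumption (initial centroids and the 1/n schedule are assumed choices):

<syntaxhighlight lang="python">
import numpy as np

def online_kmeans(samples, initial_centroids, passes=10):
    """Incremental k-means: each centroid keeps a win count, and the
    learning gain 1/count decreases as required by the Robbins–Monro
    conditions, so repeated passes approach a k-means solution."""
    centroids = initial_centroids.copy()
    counts = np.zeros(len(centroids))
    for _ in range(passes):
        for p in samples:
            i = np.argmin(((centroids - p) ** 2).sum(axis=1))  # nearest prototype
            counts[i] += 1
            centroids[i] += (p - centroids[i]) / counts[i]     # decreasing learning gain
    return centroids
</syntaxhighlight>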
=== Generative Adversarial Networks (GAN) ===
VQ has been used to quantize a feature representation layer in the discriminator of [[generative adversarial network]]s. The feature quantization (FQ) technique performs implicit feature matching.<ref>{{cite arXiv |title=Feature Quantization Improves GAN Training |eprint=2004.02088 |year=2020}}</ref> It improves GAN training and yields improved performance on a variety of popular GAN models: BigGAN for image generation, StyleGAN for face synthesis, and U-GAT-IT for unsupervised image-to-image translation.
== See also ==
'''Subtopics'''
{{col div|colwidth=40em}}
* [[Linde–Buzo–Gray algorithm]] (LBG)
* [[Learning vector quantization]]
* [[Lloyd's algorithm]]
* [[Neural gas|Growing Neural Gas]], a neural network-like system for vector quantization
{{colend}}
'''Related topics'''
{{col div|colwidth=40em}}
* [[Speech coding]]
* [[Ogg Vorbis]]
* [[Voronoi diagram]]
* [[Rate-distortion function]]
* [[Data clustering]]
* [[Centroidal Voronoi tessellation]]
* [[Image segmentation]]
* [[K-means clustering]]
* [[Autoencoder]]
* [[Deep Learning]]
{{colend}}
''Part of this article was originally based on material from the [[Free On-line Dictionary of Computing]] and is used with [[Wikipedia:Foldoc license|permission]] under the GFDL.''
== External links ==
* http://www.data-compression.com/vq.html {{Webarchive|url=https://web.archive.org/web/20171210201342/http://www.data-compression.com/vq.html |date=2017-12-10 }}
* [https://dl.acm.org/citation.cfm?id=1535126 VQ Indexes Compression and Information Hiding Using Hybrid Lossless Index Coding], Wen-Jan Chen and Wen-Tsung Huang
[[Category:Lossy compression algorithms]]
[[Category:Unsupervised learning]]
[[es:Cuantificación digital#Cuantificación vectorial]]
[[ru:Векторное квантование]]