{{missing information|general information, usage in the field (mention cinepak?), optimality conditions, choice of {{epsilon}}s, model instead of training data|date=December 2023}}
{{one source|date=December 2023}}
The '''Linde–Buzo–Gray algorithm''' (named after its creators Yoseph Linde, Andrés Buzo and [[Robert M. Gray]], who designed it in 1980)<ref>{{Cite journal| doi = 10.1109/TCOM.1980.1094577| issn = 0090-6778| volume = 28| issue = 1| pages = 84–95| last1 = Linde| first1 = Y.| last2 = Buzo| first2 = A.| last3 = Gray| first3 = R.| title = An Algorithm for Vector Quantizer Design| journal = IEEE Transactions on Communications| access-date = 2023-12-28| date = 1980| url = https://ieeexplore.ieee.org/document/1094577/}}</ref> is an [[Iterative method|iterative]] [[vector quantization]] algorithm that improves a codebook until it is [[Local optimum|locally optimal]]. It combines [[Lloyd's Algorithm]] with a splitting technique in which larger codebooks are built from smaller ones by splitting each code vector in two. The core idea of the algorithm is that, because the split codebook retains every code vector of the previous codebook, the new codebook must be at least as good as the previous one.<ref name=gray1992>{{Cite book| edition = 1| publisher = Springer| isbn = 978-1-4613-6612-6| last1 = Gray| first1 = R.| last2 = Gersho| first2 = A.| title = Vector Quantization and Signal Compression| date = 1992| url = https://doi.org/10.1007/978-1-4615-3626-0}}</ref>{{rp|361-362}}
== Description ==
The Linde–Buzo–Gray algorithm may be implemented as follows:
 '''algorithm''' linde–buzo–gray '''is'''
     '''input''': set of training vectors ''training'', codebook to improve ''old-codebook''
     '''output''': codebook that is twice the size and at least as good as ''old-codebook''
     ''new-codebook'' ← {}
     '''for each''' ''old-codevector'' '''in''' ''old-codebook'' '''do'''
         insert ''old-codevector'' into ''new-codebook''
         insert ''old-codevector'' + {{epsilon}} into ''new-codebook'', where {{epsilon}} is a small vector
     '''return''' lloyd(''new-codebook'', ''training'')
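A minimal Python sketch of this splitting step is shown below. It assumes Euclidean distortion, NumPy arrays and a fixed scalar perturbation; the names <code>split_codebook</code> and <code>lloyd</code> are illustrative rather than taken from the original paper (a matching sketch of <code>lloyd</code> follows the pseudocode below).

<syntaxhighlight lang="python">
import numpy as np

def split_codebook(old_codebook, training, eps=1e-3):
    """Double the codebook by perturbed duplication, then refine it with Lloyd iterations.

    Illustrative sketch: `old_codebook` has shape (k, d), `training` has shape (n, d),
    and `eps` is the small perturbation added to each duplicated code vector.
    """
    # Keep every old code vector and add a slightly shifted copy of it, so the
    # doubled codebook can represent the training set at least as well as before.
    new_codebook = np.concatenate([old_codebook, old_codebook + eps])
    # Refine the doubled codebook with Lloyd's algorithm (see sketch below).
    return lloyd(new_codebook, training)
</syntaxhighlight>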
 '''algorithm''' lloyd '''is'''
     '''input''': ''codebook'' to improve, set of training vectors ''training''
     '''output''': improved codebook
     '''do'''
         ''previous-codebook'' ← ''codebook''
         ''clusters'' ← divide ''training'' into |''codebook''| clusters, where each cluster contains all vectors in ''training'' that are best represented by the corresponding vector in ''codebook''
         '''for each''' cluster ''cluster'' '''in''' ''clusters'' '''do'''
             the corresponding code vector in ''codebook'' ← the centroid of all training vectors in ''cluster''
     '''while''' the reduction in distortion on ''training'' from ''previous-codebook'' to ''codebook'' > {{epsilon}}
     '''return''' ''codebook''
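A corresponding sketch of the Lloyd refinement step, under the same assumptions as above, is shown below; the stopping threshold <code>tol</code> plays the role of {{epsilon}} in the pseudocode and its value is not taken from the source.

<syntaxhighlight lang="python">
import numpy as np

def lloyd(codebook, training, tol=1e-6):
    """Refine `codebook` on `training` until the distortion stops improving.

    Illustrative sketch: `codebook` has shape (k, d), `training` has shape (n, d),
    and distortion is the mean squared Euclidean distance to the nearest code vector.
    """
    prev_distortion = np.inf
    while True:
        # Assign each training vector to its nearest code vector.
        dists = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        distortion = (dists[np.arange(len(training)), nearest] ** 2).mean()
        # Stop once the distortion no longer decreases by more than tol.
        if prev_distortion - distortion <= tol:
            return codebook
        prev_distortion = distortion
        # Move each code vector to the centroid of its cluster; leave it
        # unchanged if its cluster happens to be empty.
        codebook = np.array([
            training[nearest == i].mean(axis=0) if np.any(nearest == i) else codebook[i]
            for i in range(len(codebook))
        ])
</syntaxhighlight>

Starting from a single-vector codebook (the centroid of ''training'') and calling <code>split_codebook</code> repeatedly produces codebooks of size 2, 4, 8, …, until the desired codebook size is reached.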
== References ==
{{reflist}}
{{DEFAULTSORT:Linde-Buzo-Gray algorithm}}
[[Category:Machine learning algorithms]]
[[Category:Artificial neural networks]]
[[Category:Signal processing]]