In [[computer science]], '''learning vector quantization''' ('''LVQ''') is a [[prototype|prototype-based]] [[supervised learning|supervised]] [[Statistical classification|classification]] [[algorithm]]. LVQ is the supervised counterpart of [[vector quantization]] systems. LVQ can be understood as a special case of an [[artificial neural network]], more precisely, it applies a [[winner-take-all (computing)|winner-take-all]] [[Hebbian learning]]-based approach. It is a precursor to [[self-organizing map]]s (SOM) and related to [[neural gas]] and the [[k-nearest neighbor algorithm]] (k-NN). LVQ was invented by [[Teuvo Kohonen]].<ref>T. Kohonen. Self-Organizing Maps. Springer, Berlin, 1997.</ref>
== Definition ==
An LVQ system is represented by prototypes <math>W=(w(1),...,w(n))</math> which are defined in the [[feature space]] of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer to the data point if it classifies the data point correctly, or moved away from it if it classifies the data point incorrectly.
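The winner-take-all update described above can be sketched as a single LVQ1-style training step, here assuming Euclidean distance and a fixed learning rate (the function and parameter names are illustrative, not from the article):

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One winner-take-all update (LVQ1 sketch, illustrative only):
    move the winning prototype toward x if its class matches y,
    otherwise move it away from x."""
    # Winner: prototype closest to x under Euclidean distance.
    dists = np.linalg.norm(prototypes - x, axis=1)
    w = np.argmin(dists)
    # Attract on correct classification, repel on incorrect.
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return w

# Toy usage: one prototype per class.
P = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
winner = lvq1_step(P, labels, np.array([0.2, 0.1]), y=0)
```

In this toy run the first prototype wins and, since its label matches the data point's class, it is pulled a fraction <code>lr</code> of the way toward the input.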
==Algorithm==
The algorithm consists of three basic steps. The algorithm's input is:
* the number of neurons <math>M</math> the system will have (in the simplest case, equal to the number of classes)