An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.<ref>{{citation|author=T. Kohonen|contribution=Learning vector quantization|editor=M.A. Arbib|title=The Handbook of Brain Theory and Neural Networks|pages=537–540|publisher=MIT Press|___location=Cambridge, MA|year=1995}}</ref>
LVQ systems can be applied to multi-class classification problems in a natural way.
A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system; see, e.g., (Schneider, Biehl, and Hammer, 2009)<ref>{{cite journal|authors=P. Schneider, B. Hammer, and M. Biehl|title=Adaptive Relevance Matrices in Learning Vector Quantization|journal=Neural Computation|volume=21|issue=10|pages=3532–3561|year=2009|doi=10.1162/neco.2009.10-08-892|pmid=19635012|citeseerx=10.1.1.216.1183|s2cid=17306078}}</ref> and references therein.
# While there are vectors left in <math> L </math>, go to step 1; otherwise terminate.
Note: <math>\vec{w_i}</math> and <math>\vec{x}</math> are [[vector space|vectors]] in feature space.
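The update described above can be sketched in a few lines of Python. This is a minimal illustration of the classic LVQ1 rule, in which the winning prototype <math>\vec{w_i}</math> is moved toward the training vector <math>\vec{x}</math> when their class labels agree and away from it when they disagree; the function and variable names (<code>lvq1_step</code>, <code>alpha</code>) are illustrative and not part of the original formulation.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, x_label, alpha=0.1):
    """One LVQ1 update step (illustrative sketch).

    prototypes   : (k, d) array of prototype vectors w_i
    proto_labels : length-k sequence of class labels, one per prototype
    x            : (d,) training vector
    x_label      : class label of x
    alpha        : learning rate
    """
    # Find the winning prototype: the one closest to x in Euclidean distance.
    distances = np.linalg.norm(prototypes - x, axis=1)
    i = np.argmin(distances)

    # Attract the winner if the labels match, repel it otherwise.
    sign = 1.0 if proto_labels[i] == x_label else -1.0
    prototypes[i] += sign * alpha * (x - prototypes[i])
    return prototypes
```

Iterating this step over the training set (step 1 of the procedure above) until the list <math> L </math> is exhausted yields one training epoch; in practice the learning rate <code>alpha</code> is usually decreased over epochs so the prototypes converge.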
== References ==