Learning vector quantization
==Algorithm==
Set up:<ref>{{Citation |last=Kohonen |first=Teuvo |title=Learning Vector Quantization |date=2001 |work=Self-Organizing Maps |volume=30 |pages=245–261 |url=http://link.springer.com/10.1007/978-3-642-56927-2_6 |access-date=2025-05-23 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |doi=10.1007/978-3-642-56927-2_6 |isbn=978-3-540-67921-9}}</ref>
 
* Let the data be denoted by <math>x_i \in \R^D</math>, and their corresponding labels by <math>y_i \in \{1, 2, \dots, C\}</math>.
* The complete dataset is <math>\{(x_i, y_i)\}_{i=1}^N</math>.
* The code vectors are denoted by <math>w_j \in \R^D</math>.
* The learning rate at iteration step <math>t</math> is denoted by <math>\alpha_t</math>.
* The hyperparameters <math>w</math> (a scalar window width, not to be confused with the code vectors <math>w_j</math>) and <math>\epsilon</math> are used by LVQ2 and LVQ3. The original paper suggests <math>\epsilon \in [0.1, 0.5]</math> and <math>w \in [0.2, 0.3]</math>.
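
The setup above can be sketched in Python with NumPy. The training loop below uses the standard LVQ1 winner-take-all update (move the nearest code vector toward the sample if their labels agree, away otherwise); the toy dataset, constant learning rate, and one-prototype-per-class initialization are illustrative assumptions, not part of the definitions above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: N points x_i in R^D with labels y_i in {0, ..., C-1}
# (sizes and cluster centers chosen for illustration only).
N, D, C = 60, 2, 2
x = np.concatenate([rng.normal(0.0, 1.0, (N // 2, D)),
                    rng.normal(3.0, 1.0, (N // 2, D))])
y = np.array([0] * (N // 2) + [1] * (N // 2))

# Code vectors w_j: one per class here, initialized at the class means.
w = np.array([x[y == c].mean(axis=0) for c in range(C)])
w_labels = np.arange(C)

alpha = 0.1  # learning rate alpha_t, held constant for simplicity
for t in range(200):
    i = rng.integers(N)
    j = np.argmin(np.linalg.norm(w - x[i], axis=1))  # nearest code vector
    sign = 1.0 if w_labels[j] == y[i] else -1.0      # attract or repel
    w[j] += sign * alpha * (x[i] - w[j])             # LVQ1 update rule

# Classification: assign each point the label of its nearest code vector.
pred = w_labels[np.argmin(np.linalg.norm(x[:, None] - w, axis=2), axis=1)]
accuracy = (pred == y).mean()
```

In practice <math>\alpha_t</math> is decreased over the iterations rather than held fixed, which lets the code vectors settle.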
 
== LVQ1 ==