Learning vector quantization: Difference between revisions

An informal description follows.<br>
The algorithm consists of three basic steps. The algorithm's input is:
* how many neurons the system will have <math>M</math> (in the simplest case it is equal to the number of classes)
* what weight each neuron has <math>\vec{w_i}</math> for <math>i = 0,1,...,M - 1 </math>
* the corresponding label <math>c_i</math> of each neuron <math>\vec{w_i}</math>
* how fast the neurons are learning <math> \eta </math>
* and an input list containing tuples of vectors with labels to train the neurons <math> L </math>
 
The algorithm's flow is:
# For the next input <math>(\vec{x},c)</math> in <math> L </math>, among the neurons <math>\vec{w_i}</math> whose label <math>c_i</math> equals <math>c</math>, find the one <math>\vec{w_m}</math> for which <math>d(\vec{x},\vec{w_m})</math> is minimal, where <math>\, d \, </math> is the metric used ( [[Euclidean distance|Euclidean]], etc. ).
# Update <math>\vec{w_m}</math>; that is, move <math>\vec{w_m}</math> closer to the input <math>\vec{x}</math>:<br><math> \vec{w_m} \gets \vec{w_m} + \eta \cdot \left( \vec{x} - \vec{w_m} \right) </math>.
# While there are vectors left in <math> L </math> go to step 1, else terminate.
 
Note: <math>\vec{w_i}</math> and <math>\vec{x}</math> are [[vector space|vectors]] in feature space.<br>
A more formal description can be found here: http://jsalatas.ictpro.gr/implementation-of-competitive-learning-networks-for-weka/
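The steps above can be sketched in code. The following is a minimal illustration, not a reference implementation: the function name `lvq_train`, the plain-list vector representation, and the Euclidean metric are choices made here for clarity, and it performs a single pass over the training list <math>L</math>.

```python
import math

def lvq_train(prototypes, labels, samples, eta=0.1):
    """One pass of the LVQ update described above.

    prototypes: list of M weight vectors w_i (each a list of floats)
    labels:     list of M class labels c_i, one per prototype
    samples:    the training list L, as (x, c) tuples
    eta:        the learning rate
    """
    def dist(a, b):
        # Euclidean metric d(x, w); any other metric could be used here.
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    for x, c in samples:
        # Step 1: among prototypes carrying the same label c,
        # pick the index m minimizing d(x, w_m).
        m = min((i for i in range(len(prototypes)) if labels[i] == c),
                key=lambda i: dist(x, prototypes[i]))
        # Step 2: move w_m closer to x:  w_m <- w_m + eta * (x - w_m)
        prototypes[m] = [w + eta * (xj - w)
                         for w, xj in zip(prototypes[m], x)]
    # Step 3 (the loop) ends when L is exhausted.
    return prototypes
```

For example, with two prototypes labeled 0 and 1 and a single training sample `([0.2, 0.0], 0)`, only the prototype with label 0 is pulled toward the input; the other is left unchanged.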